Dec 05 13:56:30 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 05 13:56:30 crc restorecon[4582]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 05 13:56:30 crc restorecon[4582]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 
13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:30 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 13:56:31 crc 
restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 
13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 13:56:31 crc restorecon[4582]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 05 13:56:31 crc kubenswrapper[4858]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 13:56:31 crc kubenswrapper[4858]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 05 13:56:31 crc kubenswrapper[4858]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 13:56:31 crc kubenswrapper[4858]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 05 13:56:31 crc kubenswrapper[4858]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 05 13:56:31 crc kubenswrapper[4858]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.730261 4858 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733886 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733920 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733927 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733933 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733938 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733947 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733953 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733958 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733972 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733977 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733985 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733990 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.733994 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734002 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734013 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734020 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734026 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734033 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734038 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734043 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734054 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734059 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734065 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734075 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734089 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734097 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734103 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734110 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734115 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734121 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734126 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734134 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734138 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734145 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734168 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734172 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734177 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734181 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734185 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734189 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 13:56:31 crc 
kubenswrapper[4858]: W1205 13:56:31.734192 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734196 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734200 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734204 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734207 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734211 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734215 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734220 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734224 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734228 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734232 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734236 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734240 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734244 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734248 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734251 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734255 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734259 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734263 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734275 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734278 4858 feature_gate.go:330] unrecognized feature gate: Example Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734282 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734286 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734290 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734294 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734298 4858 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734301 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734305 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734309 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734313 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.734316 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734439 4858 flags.go:64] FLAG: --address="0.0.0.0" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734450 4858 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734460 4858 flags.go:64] FLAG: --anonymous-auth="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734466 4858 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734476 4858 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734481 4858 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734490 4858 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734503 4858 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734507 4858 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734512 4858 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734517 4858 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734521 4858 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734528 4858 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734534 4858 flags.go:64] FLAG: --cgroup-root="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734539 4858 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734543 4858 flags.go:64] FLAG: --client-ca-file="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734547 4858 flags.go:64] FLAG: --cloud-config="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734552 4858 flags.go:64] FLAG: --cloud-provider="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734556 4858 flags.go:64] FLAG: --cluster-dns="[]" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734563 4858 flags.go:64] FLAG: --cluster-domain="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734569 4858 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734574 4858 flags.go:64] FLAG: --config-dir="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734578 4858 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734583 4858 flags.go:64] FLAG: --container-log-max-files="5" Dec 05 
13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734590 4858 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734594 4858 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734599 4858 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734605 4858 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734610 4858 flags.go:64] FLAG: --contention-profiling="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734616 4858 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734621 4858 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734626 4858 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734630 4858 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734637 4858 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734642 4858 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734647 4858 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734651 4858 flags.go:64] FLAG: --enable-load-reader="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734664 4858 flags.go:64] FLAG: --enable-server="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734669 4858 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734675 4858 flags.go:64] FLAG: --event-burst="100" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734680 4858 flags.go:64] FLAG: --event-qps="50" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734684 4858 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734688 4858 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734692 4858 flags.go:64] FLAG: --eviction-hard="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734698 4858 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734705 4858 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734709 4858 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734713 4858 flags.go:64] FLAG: --eviction-soft="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734717 4858 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734722 4858 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734726 4858 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734730 4858 flags.go:64] FLAG: --experimental-mounter-path="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734734 4858 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734738 4858 flags.go:64] FLAG: 
--fail-swap-on="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734745 4858 flags.go:64] FLAG: --feature-gates="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734751 4858 flags.go:64] FLAG: --file-check-frequency="20s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734756 4858 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734761 4858 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734766 4858 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734771 4858 flags.go:64] FLAG: --healthz-port="10248" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734776 4858 flags.go:64] FLAG: --help="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734781 4858 flags.go:64] FLAG: --hostname-override="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734787 4858 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734792 4858 flags.go:64] FLAG: --http-check-frequency="20s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734796 4858 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734801 4858 flags.go:64] FLAG: --image-credential-provider-config="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734805 4858 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734809 4858 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734814 4858 flags.go:64] FLAG: --image-service-endpoint="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734840 4858 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734846 4858 flags.go:64] FLAG: --kube-api-burst="100" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734855 4858 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734860 4858 flags.go:64] FLAG: --kube-api-qps="50" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734866 4858 flags.go:64] FLAG: --kube-reserved="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.734872 4858 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735020 4858 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735027 4858 flags.go:64] FLAG: --kubelet-cgroups="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735031 4858 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735035 4858 flags.go:64] FLAG: --lock-file="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735040 4858 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735044 4858 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735049 4858 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735067 4858 flags.go:64] FLAG: --log-json-split-stream="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735071 4858 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 05 13:56:31 crc 
kubenswrapper[4858]: I1205 13:56:31.735075 4858 flags.go:64] FLAG: --log-text-split-stream="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735080 4858 flags.go:64] FLAG: --logging-format="text" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735086 4858 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735092 4858 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735099 4858 flags.go:64] FLAG: --manifest-url="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735104 4858 flags.go:64] FLAG: --manifest-url-header="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735111 4858 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735116 4858 flags.go:64] FLAG: --max-open-files="1000000" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735121 4858 flags.go:64] FLAG: --max-pods="110" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735126 4858 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735130 4858 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735134 4858 flags.go:64] FLAG: --memory-manager-policy="None" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735139 4858 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735143 4858 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735148 4858 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735152 4858 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735166 4858 flags.go:64] FLAG: --node-status-max-images="50" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735170 4858 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735174 4858 flags.go:64] FLAG: --oom-score-adj="-999" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735178 4858 flags.go:64] FLAG: --pod-cidr="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735182 4858 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735188 4858 flags.go:64] FLAG: --pod-manifest-path="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735193 4858 flags.go:64] FLAG: --pod-max-pids="-1" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735197 4858 flags.go:64] FLAG: --pods-per-core="0" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735201 4858 flags.go:64] FLAG: --port="10250" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735205 4858 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735209 4858 flags.go:64] FLAG: --provider-id="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735213 4858 flags.go:64] FLAG: --qos-reserved="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735226 4858 flags.go:64] FLAG: --read-only-port="10255" Dec 05 13:56:31 crc 
kubenswrapper[4858]: I1205 13:56:31.735230 4858 flags.go:64] FLAG: --register-node="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735235 4858 flags.go:64] FLAG: --register-schedulable="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735239 4858 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735248 4858 flags.go:64] FLAG: --registry-burst="10" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735252 4858 flags.go:64] FLAG: --registry-qps="5" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735256 4858 flags.go:64] FLAG: --reserved-cpus="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735260 4858 flags.go:64] FLAG: --reserved-memory="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735266 4858 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735274 4858 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735279 4858 flags.go:64] FLAG: --rotate-certificates="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735283 4858 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735287 4858 flags.go:64] FLAG: --runonce="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735292 4858 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735296 4858 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735301 4858 flags.go:64] FLAG: --seccomp-default="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735305 4858 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735309 4858 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735313 4858 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735318 4858 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735322 4858 flags.go:64] FLAG: --storage-driver-password="root" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735326 4858 flags.go:64] FLAG: --storage-driver-secure="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735330 4858 flags.go:64] FLAG: --storage-driver-table="stats" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735334 4858 flags.go:64] FLAG: --storage-driver-user="root" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735338 4858 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735342 4858 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735347 4858 flags.go:64] FLAG: --system-cgroups="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735351 4858 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735358 4858 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735362 4858 flags.go:64] FLAG: --tls-cert-file="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735366 4858 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 05 13:56:31 
crc kubenswrapper[4858]: I1205 13:56:31.735372 4858 flags.go:64] FLAG: --tls-min-version="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735377 4858 flags.go:64] FLAG: --tls-private-key-file="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735382 4858 flags.go:64] FLAG: --topology-manager-policy="none" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735386 4858 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735391 4858 flags.go:64] FLAG: --topology-manager-scope="container" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735395 4858 flags.go:64] FLAG: --v="2" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735402 4858 flags.go:64] FLAG: --version="false" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735408 4858 flags.go:64] FLAG: --vmodule="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735414 4858 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735419 4858 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735527 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735532 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735537 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735541 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735545 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735549 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735552 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735556 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735560 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735564 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735568 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735572 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735576 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735579 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735583 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735587 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
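The flags.go:64 dump that precedes this second warning block lists every registered flag with its effective value, whether or not it was set -- which is why pure defaults such as --max-pods="110" and --read-only-port="10255" appear alongside values passed on the command line. With the standard library, that kind of dump is a VisitAll over the flag set (a sketch, not the kubelet's code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	fs := flag.NewFlagSet("kubelet-sketch", flag.ExitOnError)
	fs.String("node-ip", "", "IP address of the node")
	fs.Int("max-pods", 110, "maximum number of pods per node")

	fs.Parse(os.Args[1:])

	// VisitAll walks every registered flag, set or not; compare
	// flag.Visit, which only walks flags set on the command line.
	fs.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value)
	})
}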
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735591 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735595 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735599 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735603 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735606 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735610 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735613 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735618 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735622 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735625 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735629 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735632 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735635 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735639 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735642 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735646 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735651 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735654 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735657 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735661 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735664 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735667 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735671 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735675 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735678 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735681 4858 feature_gate.go:330] unrecognized feature gate: 
EtcdBackendQuota Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735685 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735689 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735694 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735698 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735702 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735706 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735709 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735713 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735716 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735720 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735723 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735727 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735730 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735735 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735738 4858 feature_gate.go:330] unrecognized feature gate: Example Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735742 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735745 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735749 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735753 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735757 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735761 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735765 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735770 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735775 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735779 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735783 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735787 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735791 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.735795 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.735801 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.741865 4858 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.741894 4858 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.741979 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.741992 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.741998 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742005 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
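The feature_gate.go:386 "feature gates: {map[...]}" line above is the resolved result: the release's default gate values merged with the explicit overrides, after unknown names have been dropped with warnings. A minimal sketch of that merge, with invented defaults:

package main

import "fmt"

func main() {
	// Invented defaults; the real values ship with the Kubernetes release.
	defaults := map[string]bool{
		"DynamicResourceAllocation": false,
		"NodeSwap":                  false,
		"KMSv1":                     false,
	}
	// Overrides from configuration, already filtered to known gates.
	overrides := map[string]bool{
		"KMSv1":                     true,
		"ValidatingAdmissionPolicy": true,
	}

	effective := make(map[string]bool, len(defaults))
	for k, v := range defaults {
		effective[k] = v
	}
	for k, v := range overrides {
		effective[k] = v
	}
	fmt.Printf("feature gates: %v\n", effective)
}

That the full warning block repeats several times in this log, each time followed by an identical resolved map, is consistent with the same override list being applied to more than one gate instance during startup.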
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742012 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742018 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742022 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742027 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742032 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742049 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742055 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742061 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742066 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742071 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742076 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742082 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742087 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742093 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742099 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742104 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742109 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742114 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742119 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742126 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742131 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742136 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742141 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742146 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742152 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742157 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742162 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742166 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742182 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742187 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742194 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742199 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742203 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742208 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742212 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742217 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742222 4858 feature_gate.go:330] unrecognized feature gate: Example Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742226 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742231 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742235 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742240 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742244 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742249 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742253 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742257 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742262 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742266 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742273 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742278 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742282 4858 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742287 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742291 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742296 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742300 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742305 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742309 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742314 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742318 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742322 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742326 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742330 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742334 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742339 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742343 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742347 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742351 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742357 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.742364 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742671 4858 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742677 4858 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742682 4858 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742688 4858 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742693 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742699 4858 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742703 4858 feature_gate.go:330] unrecognized feature gate: Example Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742708 4858 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742712 4858 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742716 4858 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742720 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742724 4858 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742729 4858 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742733 4858 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742737 4858 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742741 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742746 4858 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742750 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742754 4858 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742759 4858 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742763 4858 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742767 4858 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742771 4858 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742776 4858 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742780 4858 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742784 4858 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742788 4858 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742793 4858 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742797 4858 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742802 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 13:56:31 
crc kubenswrapper[4858]: W1205 13:56:31.742806 4858 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742810 4858 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742815 4858 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742819 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742841 4858 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742846 4858 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742851 4858 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742857 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742862 4858 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742866 4858 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742871 4858 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742875 4858 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742881 4858 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742886 4858 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742891 4858 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742896 4858 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742900 4858 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742905 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742909 4858 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742913 4858 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742917 4858 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742922 4858 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742927 4858 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742932 4858 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742936 4858 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742941 4858 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742945 4858 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742981 4858 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742986 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742990 4858 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742995 4858 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.742999 4858 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743003 4858 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743008 4858 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743012 4858 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743017 4858 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743023 4858 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743027 4858 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743032 4858 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743036 4858 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.743042 4858 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.743048 4858 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.743455 4858 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.746883 4858 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.747051 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.747629 4858 server.go:997] "Starting client certificate rotation"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.747657 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.748079 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 13:22:14.750501732 +0000 UTC
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.748156 4858 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 743h25m43.002348771s for next certificate rotation
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.759682 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.761555 4858 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.773458 4858 log.go:25] "Validated CRI v1 runtime API"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.796784 4858 log.go:25] "Validated CRI v1 image API"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.798768 4858 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.800945 4858 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-05-13-51-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.800988 4858 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.814439 4858 manager.go:217] Machine: {Timestamp:2025-12-05 13:56:31.812285005 +0000 UTC m=+0.359883164 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:15431bde-3216-4207-8a7b-b80a053431b8 BootID:74cf7700-2214-426c-b823-5d8073a4da4d Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:05:61:e7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:05:61:e7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:21:26:d5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:46:2c:2b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:29:75:61 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3b:1a:29 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:46:22:64:3d:82:f4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:26:5d:00:6d:35:80 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.814624 4858 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.814791 4858 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815046 4858 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815248 4858 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815282 4858 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815468 4858 topology_manager.go:138] "Creating topology manager with none policy"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815477 4858 container_manager_linux.go:303] "Creating device plugin manager"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815603 4858 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815623 4858 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815882 4858 state_mem.go:36] "Initialized new in-memory state store"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.815957 4858 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.818007 4858 kubelet.go:418] "Attempting to sync node with API server"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.818027 4858 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.818042 4858 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.818055 4858 kubelet.go:324] "Adding apiserver pod source"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.818066 4858 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.819780 4858 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.820104 4858 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821253 4858 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821776 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821798 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821805 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821811 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821824 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821843 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821850 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821861 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821869 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821894 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821912 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.821920 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.822134 4858 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.822479 4858 server.go:1280] "Started kubelet"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.823334 4858 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.823432 4858 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.823763 4858 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 05 13:56:31 crc systemd[1]: Started Kubernetes Kubelet.
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.824384 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.826813 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.826955 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.827494 4858 server.go:460] "Adding debug handlers to kubelet server"
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.833159 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.833253 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.833900 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e564ccc647edb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 13:56:31.822454491 +0000 UTC m=+0.370052630,LastTimestamp:2025-12-05 13:56:31.822454491 +0000 UTC m=+0.370052630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.835918 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.835978 4858 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.836382 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:29:13.487136238 +0000 UTC
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.836934 4858 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.836961 4858 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.837075 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.837229 4858 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.836948 4858 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 5h32m41.650196715s for next certificate rotation
Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.839419 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="200ms"
Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.839380 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.839569 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.840747 4858 factory.go:55] Registering systemd factory
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.840905 4858 factory.go:221] Registration of the systemd container factory successfully
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.841353 4858 factory.go:153] Registering CRI-O factory
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.841380 4858 factory.go:221] Registration of the crio container factory successfully
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.841475 4858 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.841604 4858 factory.go:103] Registering Raw factory
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.841636 4858 manager.go:1196] Started watching for new ooms in manager
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.846275 4858 manager.go:319] Starting recovery of all containers
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.857727 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858085 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858104 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858118 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858132 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858147 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858162 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858177 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858194 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858208 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858223 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858236 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858254 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858272 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858287 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858300 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858316 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858329 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858345 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858358 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858370 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858382 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858396 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858410 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858422 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858460 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858478 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858494 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858507 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858522 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858536 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858550 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.858564 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859181 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859271 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859307 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859339 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859355 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859389 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859443 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859458 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859492 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859518 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859531 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859542 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859573 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859588 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859615 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859630 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859660 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859687 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859701 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.859735 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.864370 4858 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.864747 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.865238 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.865532 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.866635 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.867307 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.868420 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.868884 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.868995 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.869078 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.869155 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.869690 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.869816 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.869953 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.870039 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.870123 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.872365 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.872495 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.872608 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.872695 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.872777 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.872878 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.872969 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873064 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873168 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873257 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873350 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873459 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873545 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873635 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873719 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873796 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873917 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.867508 4858 manager.go:324] Recovery completed
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.873997 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874203 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874265 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874306 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874324 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874340 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874356 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874374 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874393 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874409 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874426 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874443 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874458 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874473 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874488 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874506 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874522 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874536 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874553 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874579 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874599 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874618 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874635 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874650 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874665 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874682 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874699 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874715 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874732 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874746 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874764 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874781 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874800 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874816 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874860 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874874 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874889 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874903 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874916 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874933 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874951 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874970 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.874992 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875012 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875029 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875047 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875064 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875084 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875098 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875114 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875131 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875145 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875161 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875177 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875191 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875208 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875222 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875260 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875290 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875310 4858 reconstruct.go:130]
"Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875330 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875349 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875367 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875386 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875406 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875421 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875435 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875450 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875464 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875478 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875493 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875542 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875561 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875576 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875611 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875671 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875698 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875714 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875727 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875742 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875757 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875771 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875792 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875807 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875820 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875936 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875960 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.875990 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876014 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876034 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876051 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876068 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876084 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" 
seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876103 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876118 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876134 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876150 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876166 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876186 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876209 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876229 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876248 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876267 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876288 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 05 13:56:31 crc 
kubenswrapper[4858]: I1205 13:56:31.876307 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876326 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876348 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876369 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876391 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876410 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876426 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876442 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876457 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876473 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876492 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876509 4858 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876533 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876554 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876574 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876597 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876617 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876638 4858 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876660 4858 reconstruct.go:97] "Volume reconstruction finished" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.876676 4858 reconciler.go:26] "Reconciler: start to sync state" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.886121 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.891861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.891906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.891920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.892584 4858 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.892601 4858 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.892626 4858 state_mem.go:36] "Initialized new in-memory state store" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.895081 4858 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.897961 4858 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.898007 4858 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.898035 4858 kubelet.go:2335] "Starting kubelet main sync loop" Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.898092 4858 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 05 13:56:31 crc kubenswrapper[4858]: W1205 13:56:31.898566 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.898625 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.921248 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e564ccc647edb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 13:56:31.822454491 +0000 UTC m=+0.370052630,LastTimestamp:2025-12-05 13:56:31.822454491 +0000 UTC m=+0.370052630,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.923056 4858 policy_none.go:49] "None policy: Start" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.923801 4858 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.923839 4858 state_mem.go:35] "Initializing new in-memory state store" Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.938049 4858 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.981213 4858 manager.go:334] "Starting Device Plugin manager" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.981268 4858 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.981286 4858 server.go:79] "Starting device plugin registration server" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.982104 4858 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.982122 4858 container_log_manager.go:189] "Initializing container log rotate workers" 
workers=1 monitorPeriod="10s" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.984161 4858 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.984268 4858 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.984284 4858 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 05 13:56:31 crc kubenswrapper[4858]: E1205 13:56:31.990644 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.998386 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.998478 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.999328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.999361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.999374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:31 crc kubenswrapper[4858]: I1205 13:56:31.999542 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.000488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.000513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.000524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.001529 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.001576 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.001584 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.001605 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.001547 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002541 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002796 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002929 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.002953 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003543 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003665 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003923 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.003945 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.004269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.004292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.004304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.004459 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.004481 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.004965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.004992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.005000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.005200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.005220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.005229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: E1205 13:56:32.040186 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="400ms" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079321 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079338 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079359 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079381 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079490 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079524 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079587 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079613 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079641 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.079750 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.085492 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.086471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.086498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.086528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.086548 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 13:56:32 crc kubenswrapper[4858]: E1205 13:56:32.086958 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181332 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181703 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181746 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: 
I1205 13:56:32.181787 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181809 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181886 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181926 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.181948 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182343 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182494 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") 
pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182473 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182528 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182672 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182728 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182790 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182866 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182910 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182937 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.182966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.183019 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.183052 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.183081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.287375 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.289104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.289177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.289192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.289231 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 13:56:32 crc kubenswrapper[4858]: E1205 13:56:32.289976 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.339625 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.349158 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.386765 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.394348 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.398130 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 13:56:32 crc kubenswrapper[4858]: E1205 13:56:32.441716 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="800ms" Dec 05 13:56:32 crc kubenswrapper[4858]: W1205 13:56:32.542568 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1b43616cfc3d5d298857eb0e2a8c20c6219ea1d7337be286b34e6da3411f18e7 WatchSource:0}: Error finding container 1b43616cfc3d5d298857eb0e2a8c20c6219ea1d7337be286b34e6da3411f18e7: Status 404 returned error can't find the container with id 1b43616cfc3d5d298857eb0e2a8c20c6219ea1d7337be286b34e6da3411f18e7 Dec 05 13:56:32 crc kubenswrapper[4858]: W1205 13:56:32.543399 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-77bdbd37c08af962c1d1adc60a1d48ccc509f30a5f699dc2524e07d46542e16e WatchSource:0}: Error finding container 77bdbd37c08af962c1d1adc60a1d48ccc509f30a5f699dc2524e07d46542e16e: Status 404 returned error can't find the container with id 77bdbd37c08af962c1d1adc60a1d48ccc509f30a5f699dc2524e07d46542e16e Dec 05 13:56:32 crc kubenswrapper[4858]: W1205 13:56:32.544208 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-12356da868808219f8bbb2af3481f9fb3c73fefeaf478d4ce4e59e72dbfc727c WatchSource:0}: Error finding container 12356da868808219f8bbb2af3481f9fb3c73fefeaf478d4ce4e59e72dbfc727c: Status 404 returned error can't find the container with id 12356da868808219f8bbb2af3481f9fb3c73fefeaf478d4ce4e59e72dbfc727c Dec 05 13:56:32 crc kubenswrapper[4858]: W1205 13:56:32.548981 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5ea8123bbca67cecd09d1adce5e162bac749066360cdb707b4ac46788f8c9e47 WatchSource:0}: Error finding container 5ea8123bbca67cecd09d1adce5e162bac749066360cdb707b4ac46788f8c9e47: Status 404 returned error can't find the container with id 5ea8123bbca67cecd09d1adce5e162bac749066360cdb707b4ac46788f8c9e47 Dec 05 13:56:32 crc kubenswrapper[4858]: W1205 13:56:32.549675 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-090407235cd2609f3f89490a9f8c03db0b2bfc8707fa5a91295f2e0bf9718110 WatchSource:0}: Error finding container 090407235cd2609f3f89490a9f8c03db0b2bfc8707fa5a91295f2e0bf9718110: Status 404 returned error can't find the container with id 090407235cd2609f3f89490a9f8c03db0b2bfc8707fa5a91295f2e0bf9718110 Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.690404 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.692129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.692378 4858 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.692394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.692433 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 13:56:32 crc kubenswrapper[4858]: E1205 13:56:32.692754 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Dec 05 13:56:32 crc kubenswrapper[4858]: W1205 13:56:32.720444 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:32 crc kubenswrapper[4858]: E1205 13:56:32.720516 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.825588 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.901675 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"12356da868808219f8bbb2af3481f9fb3c73fefeaf478d4ce4e59e72dbfc727c"} Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.902322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"090407235cd2609f3f89490a9f8c03db0b2bfc8707fa5a91295f2e0bf9718110"} Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.903169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5ea8123bbca67cecd09d1adce5e162bac749066360cdb707b4ac46788f8c9e47"} Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.903840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"77bdbd37c08af962c1d1adc60a1d48ccc509f30a5f699dc2524e07d46542e16e"} Dec 05 13:56:32 crc kubenswrapper[4858]: I1205 13:56:32.904685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b43616cfc3d5d298857eb0e2a8c20c6219ea1d7337be286b34e6da3411f18e7"} Dec 05 13:56:33 crc kubenswrapper[4858]: W1205 13:56:33.207389 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 
38.102.83.174:6443: connect: connection refused Dec 05 13:56:33 crc kubenswrapper[4858]: E1205 13:56:33.207462 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Dec 05 13:56:33 crc kubenswrapper[4858]: E1205 13:56:33.242355 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="1.6s" Dec 05 13:56:33 crc kubenswrapper[4858]: W1205 13:56:33.335867 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:33 crc kubenswrapper[4858]: E1205 13:56:33.335995 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Dec 05 13:56:33 crc kubenswrapper[4858]: W1205 13:56:33.373963 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:33 crc kubenswrapper[4858]: E1205 13:56:33.374021 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.493167 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.495277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.495307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.495318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.495343 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 13:56:33 crc kubenswrapper[4858]: E1205 13:56:33.495755 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.825761 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
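"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused

The lease errors in this window show the kubelet's retry interval doubling while the apiserver stays unreachable: interval="800ms" at 13:56:32, "1.6s" here, and later in this capture "3.2s" and "6.4s". A small sketch of that doubling backoff, with a stand-in ensureLease that always fails the way the log's requests do (a generic illustration, not the kubelet's actual lease controller):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // Stand-in for the lease update that keeps failing in the log.
    func ensureLease() error {
    	return errors.New("dial tcp 38.102.83.174:6443: connect: connection refused")
    }

    func main() {
    	// Start at the 800ms interval seen first in the log and double
    	// after each failure: 800ms -> 1.6s -> 3.2s -> 6.4s.
    	interval := 800 * time.Millisecond
    	for i := 0; i < 4; i++ {
    		if err := ensureLease(); err != nil {
    			fmt.Printf("failed to ensure lease, will retry in %v: %v\n", interval, err)
    			time.Sleep(interval)
    			interval *= 2
    		}
    	}
    }
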
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.909788 4858 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606" exitCode=0 Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.909905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606"} Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.909964 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.911342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.911374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.911387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.911744 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c" exitCode=0 Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.911853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c"} Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.911948 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.913050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.913073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.913084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.914286 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467" exitCode=0 Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.914377 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.914392 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467"} Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.915108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:33 crc 
kubenswrapper[4858]: I1205 13:56:33.915197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.915267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.917133 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.918099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.918133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.918147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.919072 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807"} Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.919116 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.919120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc"} Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.919208 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879"} Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.919220 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6"} Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.919914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.920005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.920072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.921892 4858 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="45ee1e3e588b099ea3b0edf02ba290d666b2ce1625c5f39e3d14e8658816373a" exitCode=0 Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.921927 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"45ee1e3e588b099ea3b0edf02ba290d666b2ce1625c5f39e3d14e8658816373a"} Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.922075 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.923256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.923283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:33 crc kubenswrapper[4858]: I1205 13:56:33.923308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.825155 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:34 crc kubenswrapper[4858]: W1205 13:56:34.825526 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:34 crc kubenswrapper[4858]: E1205 13:56:34.825723 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Dec 05 13:56:34 crc kubenswrapper[4858]: E1205 13:56:34.844119 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="3.2s" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.928753 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.928779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"364129393fe733afe95e5aca07c0ff9db100dcedab449f4f50db499b90046a1d"} Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.930258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.930343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.930372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.931586 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce"} Dec 05 13:56:34 crc 
kubenswrapper[4858]: I1205 13:56:34.934188 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.934186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a"} Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.934006 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a" exitCode=0 Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.935454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.935640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.935690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.937981 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.937980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979"} Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.938943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.938961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:34 crc kubenswrapper[4858]: I1205 13:56:34.938972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.096178 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.097185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.097228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.097274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.097311 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 13:56:35 crc kubenswrapper[4858]: E1205 13:56:35.097901 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Dec 05 13:56:35 crc kubenswrapper[4858]: W1205 13:56:35.249307 4858 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:35 crc kubenswrapper[4858]: E1205 13:56:35.249655 4858 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.514067 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.825185 4858 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.942772 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510"} Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.942866 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c"} Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.942967 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.943812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.943858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.943869 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.946023 4858 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6" exitCode=0 Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.946073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6"} Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.946165 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.946807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.946850 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.946862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:35 crc 
kubenswrapper[4858]: I1205 13:56:35.950771 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.951128 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.952798 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.953266 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729"} Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.953303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7"} Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.953331 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489"} Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.953351 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695"} Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.953491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.953515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.953533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.954370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.954398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.954411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.955288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.955317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:35 crc kubenswrapper[4858]: I1205 13:56:35.955330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.956863 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.956932 4858 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.956799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4"} Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.957003 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670"} Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.957032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb"} Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.957059 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0"} Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.957082 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.957161 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.957815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.957872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.957885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.958147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.958195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:36 crc kubenswrapper[4858]: I1205 13:56:36.958215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.487740 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.488082 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.489610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.489645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.489655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.494460 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.941251 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.962942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae"} Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.963041 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.963053 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.964603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.964638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.964650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.964663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.964690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:37 crc kubenswrapper[4858]: I1205 13:56:37.964699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.041908 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.042024 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.042053 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.042959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.042986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.042995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.298748 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.299831 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.299860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.299871 
4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.299894 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.515051 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.515146 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.965397 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.965431 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.965449 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.966609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.966645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.966655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.966914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.966954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:38 crc kubenswrapper[4858]: I1205 13:56:38.966966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.159598 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.159770 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.161277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.161315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.161327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.387117 
4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.387309 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.387359 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.388395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.388422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.388432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.678577 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.891626 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.891799 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.893112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.893152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.893162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.969742 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.970455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.970484 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:40 crc kubenswrapper[4858]: I1205 13:56:40.970495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:41 crc kubenswrapper[4858]: I1205 13:56:41.154456 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:41 crc kubenswrapper[4858]: I1205 13:56:41.154645 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:41 crc kubenswrapper[4858]: I1205 13:56:41.155737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:41 crc kubenswrapper[4858]: I1205 13:56:41.155761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:41 crc kubenswrapper[4858]: I1205 13:56:41.155770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:41 crc 
kubenswrapper[4858]: E1205 13:56:41.990783 4858 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 05 13:56:44 crc kubenswrapper[4858]: I1205 13:56:44.357362 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 05 13:56:44 crc kubenswrapper[4858]: I1205 13:56:44.357552 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:44 crc kubenswrapper[4858]: I1205 13:56:44.358583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:44 crc kubenswrapper[4858]: I1205 13:56:44.358616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:44 crc kubenswrapper[4858]: I1205 13:56:44.358625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:45 crc kubenswrapper[4858]: I1205 13:56:45.841722 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 05 13:56:45 crc kubenswrapper[4858]: I1205 13:56:45.841780 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 05 13:56:45 crc kubenswrapper[4858]: I1205 13:56:45.851450 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 05 13:56:45 crc kubenswrapper[4858]: I1205 13:56:45.851507 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.052934 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.053203 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.054629 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.054806 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
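probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"

Two distinct probe failures are interleaved here: the kube-apiserver startup probe gets an HTTP 403 ("system:anonymous cannot get path /livez"), meaning the server is already answering requests but still refusing anonymous access, while the check-endpoints readiness probe cannot connect at all. Kubelet HTTP probes count only status codes in [200, 400) as success, so the 403 fails the probe just as surely as "connection refused" does. A minimal check in that spirit, with the /livez URL assembled from the addresses in the log and the port assumed (a sketch, not the kubelet's prober):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe returns true only for status codes in [200, 400), mirroring
    // how an HTTPS probe treats the 403 above as a failure.
    func probe(url string) (bool, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Probing a self-signed control-plane cert; skipping
    			// verification is an assumption for this sketch.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err // e.g. "connect: connection refused"
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode >= 200 && resp.StatusCode < 400, nil
    }

    func main() {
    	ok, err := probe("https://192.168.126.11:6443/livez") // port assumed
    	fmt.Println(ok, err)
    }
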
probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.054956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.055000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.055011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.060072 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.514810 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.514902 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.989047 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.989386 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.989430 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.989891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.989916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:48 crc kubenswrapper[4858]: I1205 13:56:48.989925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.679465 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 
13:56:50.679554 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 05 13:56:50 crc kubenswrapper[4858]: E1205 13:56:50.840094 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.843383 4858 trace.go:236] Trace[481005719]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 13:56:36.269) (total time: 14573ms): Dec 05 13:56:50 crc kubenswrapper[4858]: Trace[481005719]: ---"Objects listed" error:<nil> 14573ms (13:56:50.843) Dec 05 13:56:50 crc kubenswrapper[4858]: Trace[481005719]: [14.573638422s] [14.573638422s] END Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.843417 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.843491 4858 trace.go:236] Trace[1366847627]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 13:56:38.985) (total time: 11858ms): Dec 05 13:56:50 crc kubenswrapper[4858]: Trace[1366847627]: ---"Objects listed" error:<nil> 11858ms (13:56:50.843) Dec 05 13:56:50 crc kubenswrapper[4858]: Trace[1366847627]: [11.858085653s] [11.858085653s] END Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.843514 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.843984 4858 trace.go:236] Trace[29193303]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 13:56:40.541) (total time: 10302ms): Dec 05 13:56:50 crc kubenswrapper[4858]: Trace[29193303]: ---"Objects listed" error:<nil> 10302ms (13:56:50.843) Dec 05 13:56:50 crc kubenswrapper[4858]: Trace[29193303]: [10.302389986s] [10.302389986s] END Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.844000 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.845190 4858 trace.go:236] Trace[85576209]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 13:56:35.970) (total time: 14874ms): Dec 05 13:56:50 crc kubenswrapper[4858]: Trace[85576209]: ---"Objects listed" error:<nil> 14874ms (13:56:50.844) Dec 05 13:56:50 crc kubenswrapper[4858]: Trace[85576209]: [14.874381839s] [14.874381839s] END Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.845231 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 05 13:56:50 crc kubenswrapper[4858]: E1205 13:56:50.845369 4858 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 05 13:56:50 crc kubenswrapper[4858]: I1205 13:56:50.845629 4858 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.159180 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
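pod="openshift-kube-controller-manager/kube-controller-manager-crc"

The Trace[...] blocks above are client-go reflectors completing their initial LIST after 10-15 seconds of "connection refused": once the apiserver responds, each informer logs "Caches populated" for its type (CSIDriver, RuntimeClass, Node, Service) and switches to watching. A minimal client-go informer setup showing the same list-then-watch cycle; the kubeconfig path is an assumption for the sketch, and the program needs the k8s.io/client-go module:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path, assumed for this sketch.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	factory := informers.NewSharedInformerFactory(client, 0)
    	factory.Core().V1().Nodes().Informer() // register a Node informer
    	stop := make(chan struct{})
    	factory.Start(stop)            // each informer runs ListAndWatch
    	factory.WaitForCacheSync(stop) // blocks until the initial LIST lands
    	fmt.Println("caches populated")
    }
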
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.829191 4858 apiserver.go:52] "Watching apiserver" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.831765 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.832057 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.832358 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.832443 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.832457 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.832560 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.832586 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.832597 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.833166 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.833199 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.833238 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.834590 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.834994 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.835038 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.835046 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.835040 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.835165 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.835312 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.835572 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.837765 4858 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.837802 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851792 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851811 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851844 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851878 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851892 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851923 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851968 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.851986 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852000 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852016 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852047 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852063 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852077 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852092 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852108 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852123 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852115 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852164 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852197 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852213 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852228 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852243 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852257 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852256 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852273 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852312 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852334 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852390 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852404 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852411 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852499 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852523 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852526 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852560 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852612 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852646 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852688 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852729 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852758 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852779 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852800 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852824 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852880 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852935 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852957 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852971 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852987 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod 
\"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853020 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853048 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853064 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853096 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853110 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853140 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853154 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853170 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853185 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853200 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853243 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853273 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853289 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853319 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853339 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853393 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853410 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854476 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854514 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854686 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854710 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854760 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854778 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854793 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854808 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854826 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854884 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854934 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854966 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855009 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855025 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855040 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855077 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855099 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855205 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855222 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855238 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855254 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855318 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855348 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855396 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855452 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855474 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 
13:56:51.855517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856061 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856094 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856137 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856158 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856180 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856301 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856319 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856363 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856390 4858 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856405 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856419 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856457 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856500 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856539 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856589 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856650 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856679 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 
13:56:51.856708 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856723 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856752 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856964 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856979 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857040 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 13:56:51 crc kubenswrapper[4858]: 
I1205 13:56:51.857055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857073 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857113 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857162 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857206 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857254 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857331 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857464 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857500 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857526 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857541 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858615 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852557 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852651 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852739 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852844 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852948 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.852994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853044 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853895 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853975 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.853996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854349 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854402 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854463 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854693 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.854927 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.860745 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.861040 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855253 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855476 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.855779 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856080 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856475 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856663 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856741 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856881 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856982 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857077 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857177 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.856058 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857535 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.857823 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858123 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858275 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858283 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858488 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858499 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858745 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858951 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.858960 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.859025 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.859258 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.859418 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.859638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.859821 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.861787 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.861821 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.859909 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.859924 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.860106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.860228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.862865 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.860459 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.863069 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.863640 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.863798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.864340 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.864428 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.864721 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.865103 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.865207 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.866627 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.866695 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.867062 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.867424 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.867857 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.868010 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.868432 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.868718 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.869146 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.869434 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.869624 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.869652 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.870045 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.870067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.870254 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.870486 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.863551 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.870826 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.871355 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.871632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.871881 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.872136 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.872613 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.872887 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.872941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.872964 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.872988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873052 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873159 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873259 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873333 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873593 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873633 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.873967 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874172 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874501 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874638 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874649 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874874 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874949 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875211 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875370 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875460 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875511 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875530 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875548 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875698 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.875919 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:56:52.374754462 +0000 UTC m=+20.922352601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.875963 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876032 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.874941 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876221 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876217 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876258 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876383 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876410 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876472 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876530 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876538 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876553 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876583 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.876707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.877306 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.877820 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.877859 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.877954 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878028 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878019 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878241 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878557 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878574 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878646 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878826 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878895 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878924 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878950 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878972 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879033 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879057 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879082 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879111 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879167 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879192 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879357 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879387 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879410 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879432 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879485 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879538 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879561 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879582 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879604 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879746 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879815 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879917 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879985 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880062 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880095 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880120 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880152 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880177 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880204 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880256 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880303 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880327 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880372 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: 
I1205 13:56:51.880475 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880491 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880503 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880516 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880529 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880596 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880611 4858 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880625 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880639 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880651 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880664 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880678 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880690 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 
13:56:51.880706 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880717 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880731 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880746 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880758 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880770 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880782 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880796 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880808 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880851 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880865 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880879 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880893 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") 
on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880905 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880918 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880931 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880943 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880954 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880966 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880978 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880991 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.881003 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.881016 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.881030 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.881042 4858 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.881059 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.878721 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879338 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879513 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879423 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879743 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.879853 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880225 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880662 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.880975 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.881273 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.881546 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.881892 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.882032 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.882257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.882321 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.882378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.882483 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.882606 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.882773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.882657 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.883693 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.884369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.884409 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.884527 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.884920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.885166 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.885170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.885250 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.885268 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.885400 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.885648 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.885656 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.885897 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886203 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886325 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886350 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886435 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886541 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886476 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886661 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886681 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886887 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.886902 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887099 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887200 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887344 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887374 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887387 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887399 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887411 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887423 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887434 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887446 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node 
\"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887457 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887471 4858 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887488 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887499 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.887501 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.887586 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:52.387571332 +0000 UTC m=+20.935169471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.888162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.888234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.888330 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.888391 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:52.388373154 +0000 UTC m=+20.935971403 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.888609 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.888997 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.889117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.889160 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.889442 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.889537 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.889965 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.890339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891305 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891482 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.887511 4858 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891601 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891616 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891631 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891644 4858 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891657 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891669 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891681 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" 
DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891694 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891705 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891716 4858 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891729 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891740 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891752 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891763 4858 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891774 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891786 4858 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891796 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891808 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891826 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891854 4858 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 
crc kubenswrapper[4858]: I1205 13:56:51.891866 4858 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891880 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891892 4858 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891903 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891915 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891926 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891936 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891946 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891956 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891966 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891978 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891989 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.891999 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.892010 4858 
reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.892021 4858 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.892032 4858 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.892042 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.892255 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.894016 4858 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.907360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.892053 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912799 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912824 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912854 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912868 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912882 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912894 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912906 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912917 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912929 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912941 4858 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912954 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912969 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912982 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.912994 4858 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913010 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913022 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913044 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913055 4858 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913067 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913063 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913078 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913090 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913103 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913114 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913127 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913138 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913149 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913160 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913171 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913184 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 
13:56:51.913197 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.913203 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.913221 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.913232 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.913280 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:52.413262504 +0000 UTC m=+20.960860643 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913208 4858 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913277 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913309 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913321 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913332 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913341 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913348 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913356 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913364 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913762 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913773 4858 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913780 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 
13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913790 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913798 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.913806 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.915614 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.916789 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.918055 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.918076 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.918088 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:51 crc kubenswrapper[4858]: E1205 13:56:51.918160 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:52.418113798 +0000 UTC m=+20.965711937 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.918309 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.918964 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.922872 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.923234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.926234 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.928492 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.928921 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.935608 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.936574 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.937568 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.938216 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.939495 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.939910 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.940097 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.941135 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.941645 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.942773 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.943245 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.943737 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.944591 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.945189 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.946173 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.946558 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.947115 4858 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.948322 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.948780 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.949092 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.949810 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.950228 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.951324 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.951712 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.952344 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.953435 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.953949 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.954896 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.955334 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.956145 4858 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.956243 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.957786 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.958356 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.958596 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.959309 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.960719 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.961405 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.962339 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.963031 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.964153 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.964645 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.965656 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.966527 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.967482 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.967734 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.968279 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.969294 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.969895 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.971060 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.971702 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.972548 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.973046 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.973950 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.974589 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 05 13:56:51 crc 
Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.975323 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.975554 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.983646 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.992459 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:51 crc kubenswrapper[4858]: I1205 13:56:51.997790 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:51.999995 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729" exitCode=255
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.000370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729"}
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.000489 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.008262 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.008336 4858 scope.go:117] "RemoveContainer" containerID="ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.010022 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015308 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015608 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015625 4858 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015647 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015707 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015722 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015751 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015763 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015781 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015789 4858 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015809 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015850 4858 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015865 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015873 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015881 4858 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015890 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015917 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015927 4858 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015936 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015947 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015958 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.015969 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016007 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016018 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016026 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016034 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016086 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016099 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016108 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016116 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016124 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016131 4858 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016139 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016185 4858 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016216 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016224 4858 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016233 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016263 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016274 4858 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016293 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016303 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016355 4858 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016364 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016373 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016390 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016398 4858 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016425 4858 
reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016434 4858 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016442 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016450 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016468 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016481 4858 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016539 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016557 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016565 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016574 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016581 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016608 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016617 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016625 4858 reconciler_common.go:293] "Volume 
detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.016466 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.019047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.021411 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.032421 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.032421 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.043720 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.056172 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.065656 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.076212 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.084383 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.098946 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.110396 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.119505 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.131114 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05
T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.143381 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.151892 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.166702 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 13:56:52 crc kubenswrapper[4858]: W1205 13:56:52.167680 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-334d26ed38eb52e44c58b948e6e67e13cfe95da31794e4b2f0c49c9c198b1a44 WatchSource:0}: Error finding container 334d26ed38eb52e44c58b948e6e67e13cfe95da31794e4b2f0c49c9c198b1a44: Status 404 returned error can't find the container with id 334d26ed38eb52e44c58b948e6e67e13cfe95da31794e4b2f0c49c9c198b1a44 Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.420198 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.420317 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.420346 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420373 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:56:53.420341894 +0000 UTC m=+21.967940063 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.420449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420459 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420500 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420517 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:53.420499928 +0000 UTC m=+21.968098147 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420524 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420539 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420577 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.420540 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420582 4858 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:53.42057139 +0000 UTC m=+21.968169609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420622 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:53.420613391 +0000 UTC m=+21.968211640 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.420996 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.421013 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.421023 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:52 crc kubenswrapper[4858]: E1205 13:56:52.421073 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:53.421062323 +0000 UTC m=+21.968660542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.561888 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-d85q7"] Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.562309 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-d85q7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.575187 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-87w6x"] Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.575477 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.575865 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.576700 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.577048 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.577997 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.578220 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.584483 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.584633 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.592920 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.612346 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 
05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.628122 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.647014 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.661490 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.688407 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.705946 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.717478 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.723089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvnz\" (UniqueName: \"kubernetes.io/projected/fdf51fde-d54f-4e8a-9a66-8abf33dce5e0-kube-api-access-kzvnz\") pod \"node-resolver-d85q7\" (UID: \"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\") " pod="openshift-dns/node-resolver-d85q7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.723139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a69d20a-c80f-4814-9cf2-fce9ade638c5-serviceca\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.723162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnx5t\" (UniqueName: \"kubernetes.io/projected/9a69d20a-c80f-4814-9cf2-fce9ade638c5-kube-api-access-vnx5t\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.723206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a69d20a-c80f-4814-9cf2-fce9ade638c5-host\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.723255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fdf51fde-d54f-4e8a-9a66-8abf33dce5e0-hosts-file\") pod \"node-resolver-d85q7\" (UID: \"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\") " pod="openshift-dns/node-resolver-d85q7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.732994 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.742487 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.756344 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.812025 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.824355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvnz\" (UniqueName: \"kubernetes.io/projected/fdf51fde-d54f-4e8a-9a66-8abf33dce5e0-kube-api-access-kzvnz\") pod \"node-resolver-d85q7\" (UID: \"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\") " pod="openshift-dns/node-resolver-d85q7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.824397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnx5t\" (UniqueName: \"kubernetes.io/projected/9a69d20a-c80f-4814-9cf2-fce9ade638c5-kube-api-access-vnx5t\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.824415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a69d20a-c80f-4814-9cf2-fce9ade638c5-serviceca\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.824448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a69d20a-c80f-4814-9cf2-fce9ade638c5-host\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.824463 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fdf51fde-d54f-4e8a-9a66-8abf33dce5e0-hosts-file\") pod \"node-resolver-d85q7\" (UID: \"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\") " pod="openshift-dns/node-resolver-d85q7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.824527 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fdf51fde-d54f-4e8a-9a66-8abf33dce5e0-hosts-file\") pod \"node-resolver-d85q7\" (UID: \"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\") " pod="openshift-dns/node-resolver-d85q7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.825235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a69d20a-c80f-4814-9cf2-fce9ade638c5-host\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 
crc kubenswrapper[4858]: I1205 13:56:52.828583 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9a69d20a-c80f-4814-9cf2-fce9ade638c5-serviceca\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.831107 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.854517 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnx5t\" (UniqueName: \"kubernetes.io/projected/9a69d20a-c80f-4814-9cf2-fce9ade638c5-kube-api-access-vnx5t\") pod \"node-ca-87w6x\" (UID: \"9a69d20a-c80f-4814-9cf2-fce9ade638c5\") " pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.859344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvnz\" (UniqueName: 
\"kubernetes.io/projected/fdf51fde-d54f-4e8a-9a66-8abf33dce5e0-kube-api-access-kzvnz\") pod \"node-resolver-d85q7\" (UID: \"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\") " pod="openshift-dns/node-resolver-d85q7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.874289 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d85q7" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.885471 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.889099 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-87w6x" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.926339 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.961371 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 13:56:52 crc kubenswrapper[4858]: I1205 13:56:52.979521 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 
05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.007122 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.009343 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d85q7" event={"ID":"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0","Type":"ContainerStarted","Data":"b8964ad9fa33ef3b42c2992645126d944a7acf475675906092870806ab75f7fe"} Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.014759 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628"} Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.014810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116"} Dec 05 13:56:53 crc kubenswrapper[4858]: 
I1205 13:56:53.014854 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"334d26ed38eb52e44c58b948e6e67e13cfe95da31794e4b2f0c49c9c198b1a44"} Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.020209 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-vtgkn"] Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.020794 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.021358 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6a7e89a4191fbea92d76c9d1712e5958650406972d24f51fc5d53e68dbdfd18f"} Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.022776 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.023025 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.023336 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.023368 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.029773 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fjdj6"] Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.030085 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-q8fqr"] Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.030510 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jtntj"] Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.030634 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.030790 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.031393 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.032395 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e"} Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.032461 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cbe1c03cf9748fb119cbb37b47154ee0b1b13e16bd91304aeb11fe6e48ac3be5"} Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.035594 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.035649 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.035703 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.035797 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.036123 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.036195 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.036360 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.036391 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.036528 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.036638 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.038536 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.038685 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.038734 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.042688 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.038740 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 13:56:53 crc 
kubenswrapper[4858]: I1205 13:56:53.038801 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.046614 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287fa
af92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.075207 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3"} Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.075917 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.076257 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.078612 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-87w6x" event={"ID":"9a69d20a-c80f-4814-9cf2-fce9ade638c5","Type":"ContainerStarted","Data":"d1708fd4c813ac35210052be4c9e93f236b88dd466964310450de3febfcfdbf6"} Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.107488 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.123795 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wl6f\" (UniqueName: \"kubernetes.io/projected/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-kube-api-access-9wl6f\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126442 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-k8s-cni-cncf-io\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-node-log\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-os-release\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-cni-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-systemd-units\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-cnibin\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr66j\" (UniqueName: \"kubernetes.io/projected/1b855b1c-b9bc-4249-80a9-87108585857f-kube-api-access-sr66j\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126558 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-socket-dir-parent\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-netns\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126601 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-os-release\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126637 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l54d2\" (UniqueName: \"kubernetes.io/projected/19dac4e8-493c-456c-b8ea-cc1e48b9867c-kube-api-access-l54d2\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-netd\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126679 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19dac4e8-493c-456c-b8ea-cc1e48b9867c-cni-binary-copy\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126702 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-daemon-config\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126716 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-etc-kubernetes\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 
13:56:53.126729 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-systemd\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126767 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-slash\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-ovn\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126795 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-config\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-script-lib\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-kubelet\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-kubelet\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126939 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-var-lib-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ab8742a-625e-4bb8-9329-31f39a34fe48-rootfs\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " 
pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.126993 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab8742a-625e-4bb8-9329-31f39a34fe48-proxy-tls\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127007 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krnc2\" (UniqueName: \"kubernetes.io/projected/2ab8742a-625e-4bb8-9329-31f39a34fe48-kube-api-access-krnc2\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127022 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-system-cni-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-cnibin\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-cni-bin\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-cni-multus\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127109 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-system-cni-dir\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-env-overrides\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127152 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-netns\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127221 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-log-socket\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovn-node-metrics-cert\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127253 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b855b1c-b9bc-4249-80a9-87108585857f-cni-binary-copy\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127291 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1b855b1c-b9bc-4249-80a9-87108585857f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-hostroot\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127318 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-conf-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127334 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ab8742a-625e-4bb8-9329-31f39a34fe48-mcd-auth-proxy-config\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127349 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-multus-certs\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-etc-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.127413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-bin\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.137219 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.159192 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05
T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.187782 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.205146 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.226718 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.228886 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-hostroot\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.228933 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-hostroot\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.229005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-conf-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.229030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.229085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-conf-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.229229 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2ab8742a-625e-4bb8-9329-31f39a34fe48-mcd-auth-proxy-config\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.229405 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.229447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-multus-certs\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ab8742a-625e-4bb8-9329-31f39a34fe48-mcd-auth-proxy-config\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.229264 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-multus-certs\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230172 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-etc-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-bin\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-etc-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wl6f\" (UniqueName: \"kubernetes.io/projected/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-kube-api-access-9wl6f\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-bin\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230476 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-node-log\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-node-log\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-os-release\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-k8s-cni-cncf-io\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230909 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-os-release\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-systemd-units\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-k8s-cni-cncf-io\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-systemd-units\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.230989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-cnibin\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: 
\"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr66j\" (UniqueName: \"kubernetes.io/projected/1b855b1c-b9bc-4249-80a9-87108585857f-kube-api-access-sr66j\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-cnibin\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-cni-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231412 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-cni-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-netns\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231473 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-netns\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-os-release\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-socket-dir-parent\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc 
kubenswrapper[4858]: I1205 13:56:53.231612 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231657 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-os-release\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231688 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l54d2\" (UniqueName: \"kubernetes.io/projected/19dac4e8-493c-456c-b8ea-cc1e48b9867c-kube-api-access-l54d2\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-netd\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231992 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19dac4e8-493c-456c-b8ea-cc1e48b9867c-cni-binary-copy\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.232886 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-daemon-config\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231945 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-netd\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.232807 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19dac4e8-493c-456c-b8ea-cc1e48b9867c-cni-binary-copy\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-daemon-config\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233514 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-etc-kubernetes\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.231760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-multus-socket-dir-parent\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233577 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-systemd\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233642 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-etc-kubernetes\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233676 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-systemd\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-slash\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-ovn\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233771 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-slash\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-config\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.233877 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-ovn\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc 
kubenswrapper[4858]: I1205 13:56:53.234412 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-config\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.234472 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-script-lib\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.234526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-kubelet\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.234550 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-var-lib-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ab8742a-625e-4bb8-9329-31f39a34fe48-rootfs\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-script-lib\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-var-lib-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ab8742a-625e-4bb8-9329-31f39a34fe48-rootfs\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab8742a-625e-4bb8-9329-31f39a34fe48-proxy-tls\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235092 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-kubelet\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krnc2\" (UniqueName: \"kubernetes.io/projected/2ab8742a-625e-4bb8-9329-31f39a34fe48-kube-api-access-krnc2\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235815 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-kubelet\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235859 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-cnibin\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-cni-bin\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-cni-multus\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.235994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-system-cni-dir\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236020 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-system-cni-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-env-overrides\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-netns\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236117 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-log-socket\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-cni-multus\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236139 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovn-node-metrics-cert\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b855b1c-b9bc-4249-80a9-87108585857f-cni-binary-copy\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-kubelet\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-openvswitch\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1b855b1c-b9bc-4249-80a9-87108585857f-system-cni-dir\") pod 
\"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236232 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-cnibin\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-system-cni-dir\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236399 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-run-netns\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236423 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/19dac4e8-493c-456c-b8ea-cc1e48b9867c-host-var-lib-cni-bin\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-log-socket\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.236552 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1b855b1c-b9bc-4249-80a9-87108585857f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.237082 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-env-overrides\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.237360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1b855b1c-b9bc-4249-80a9-87108585857f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " 
pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.237797 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1b855b1c-b9bc-4249-80a9-87108585857f-cni-binary-copy\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.241531 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab8742a-625e-4bb8-9329-31f39a34fe48-proxy-tls\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.248912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovn-node-metrics-cert\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.249513 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.263904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l54d2\" (UniqueName: \"kubernetes.io/projected/19dac4e8-493c-456c-b8ea-cc1e48b9867c-kube-api-access-l54d2\") pod \"multus-fjdj6\" (UID: \"19dac4e8-493c-456c-b8ea-cc1e48b9867c\") " pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.265154 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wl6f\" (UniqueName: \"kubernetes.io/projected/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-kube-api-access-9wl6f\") pod \"ovnkube-node-jtntj\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.265331 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr66j\" (UniqueName: \"kubernetes.io/projected/1b855b1c-b9bc-4249-80a9-87108585857f-kube-api-access-sr66j\") pod \"multus-additional-cni-plugins-q8fqr\" (UID: \"1b855b1c-b9bc-4249-80a9-87108585857f\") " pod="openshift-multus/multus-additional-cni-plugins-q8fqr"
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.266559 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krnc2\" (UniqueName: \"kubernetes.io/projected/2ab8742a-625e-4bb8-9329-31f39a34fe48-kube-api-access-krnc2\") pod \"machine-config-daemon-vtgkn\" (UID: \"2ab8742a-625e-4bb8-9329-31f39a34fe48\") " pod="openshift-machine-config-operator/machine-config-daemon-vtgkn"
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.284149 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z"
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.322531 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.341666 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.352862 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fjdj6" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.375299 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.377312 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-pl
ugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a7
14c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z"
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.400601 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj"
Dec 05 13:56:53 crc kubenswrapper[4858]: W1205 13:56:53.406096 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19dac4e8_493c_456c_b8ea_cc1e48b9867c.slice/crio-3dd24e5181fdbfc082ca7fd8bce87238f37950336c1511451de9f736679b9bb6 WatchSource:0}: Error finding container 3dd24e5181fdbfc082ca7fd8bce87238f37950336c1511451de9f736679b9bb6: Status 404 returned error can't find the container with id 3dd24e5181fdbfc082ca7fd8bce87238f37950336c1511451de9f736679b9bb6
Dec 05 13:56:53 crc kubenswrapper[4858]: W1205 13:56:53.410341 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b855b1c_b9bc_4249_80a9_87108585857f.slice/crio-8b219068bf276ec7d17c83444d68a66c5dd3810589524b7463efea69b4e4c93a WatchSource:0}: Error finding container 8b219068bf276ec7d17c83444d68a66c5dd3810589524b7463efea69b4e4c93a: Status 404 returned error can't find the container with id 8b219068bf276ec7d17c83444d68a66c5dd3810589524b7463efea69b4e4c93a
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.438063 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.438183 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438211 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:56:55.438176211 +0000 UTC m=+23.985774350 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
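The UnmountVolume.TearDown failure directly above is not certificate-related: it suggests the CSI node driver kubevirt.io.hostpath-provisioner simply has not re-registered with the restarted kubelet yet. Node plugins announce themselves through sockets in the kubelet's plugin-registration directory, so a missing registration socket would explain "not found in the list of registered CSI drivers"; the operation is parked and retried, as the following lines show. A small sketch to check, assuming the default kubelet root directory (adjust the path if the kubelet is configured differently):

```go
// listplugins.go: list the plugin-registration sockets the kubelet watches.
// If kubevirt.io.hostpath-provisioner has no socket here, the TearDown error
// above is expected until the driver pod comes back up.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	const dir = "/var/lib/kubelet/plugins_registry" // default registration dir
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // one socket per registered node plugin
	}
}
```

Once the driver's registration socket reappears, the pending unmount should go through on a later retry.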
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.438236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.438280 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.438298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438314 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438332 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438345 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438371 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
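The projected.go and configmap.go errors above are another restart transient. Each kube-api-access-* volume is a projected volume that bundles the service-account token with the kube-root-ca.crt and, on OpenShift, openshift-service-ca.crt configmaps; "object ... not registered" indicates the kubelet has not yet registered those objects for the pods it is re-admitting, so SetUp cannot resolve them yet. An illustrative reconstruction of such a volume using the k8s.io/api types (assumed shape and token lifetime, not captured from this cluster):

```go
// projected.go: approximate shape of the kube-api-access-* projected volume
// whose SetUp is failing in the entries above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // common default lifetime for kube-api-access tokens
	vol := corev1.Volume{
		Name: "kube-api-access-cqllr", // volume name taken from the log above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// These two sources are what fail: the configmaps are not yet
					// registered with the restarted kubelet.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```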
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438401 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:55.438392727 +0000 UTC m=+23.985990866 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438448 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438483 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438491 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438454 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438528 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:55.4385214 +0000 UTC m=+23.986119539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.438581 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:55.438553251 +0000 UTC m=+23.986151440 (durationBeforeRetry 2s). 
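The repeated "No retries permitted until ... (durationBeforeRetry 2s)" lines are the kubelet's per-operation backoff: each failed mount or unmount arms a delay before the reconciler may attempt that same volume again, and the delay grows on repeated failures up to a cap, which is why these volumes reappear in later reconcile passes instead of retrying hot. A sketch of the pattern, with assumed constants rather than kubelet's exact values:

```go
// backoff.go: the retry pattern behind "No retries permitted until ...".
// Illustrative only; the initial delay and cap here are assumptions.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay, max time.Duration
	next       time.Time // "no retries permitted until" this instant
}

func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed initial delay
	} else {
		b.delay *= 2
		if b.delay > b.max {
			b.delay = b.max
		}
	}
	b.next = now.Add(b.delay)
}

func (b *backoff) ready(now time.Time) bool { return !now.Before(b.next) }

func main() {
	b := &backoff{max: 2 * time.Minute}
	now := time.Now()
	for i := 1; i <= 5; i++ {
		b.fail(now)
		fmt.Printf("attempt %d failed; next retry at %s (delay %s)\n",
			i, b.next.Format(time.RFC3339), b.delay)
		now = b.next // pretend the next attempt happens at the earliest allowed time
	}
}
```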
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.486036 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.576223 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.617900 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.649185 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.676683 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.714748 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.728481 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.743173 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.767801 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.787516 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.808767 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.830572 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.851676 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.865740 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.879535 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.896405 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:53Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.900445 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.900545 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.900806 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.900876 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:56:53 crc kubenswrapper[4858]: I1205 13:56:53.900910 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:53 crc kubenswrapper[4858]: E1205 13:56:53.900949 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.083309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d85q7" event={"ID":"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0","Type":"ContainerStarted","Data":"c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.084976 4858 generic.go:334] "Generic (PLEG): container finished" podID="1b855b1c-b9bc-4249-80a9-87108585857f" containerID="58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285" exitCode=0 Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.085027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" event={"ID":"1b855b1c-b9bc-4249-80a9-87108585857f","Type":"ContainerDied","Data":"58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.085042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" event={"ID":"1b855b1c-b9bc-4249-80a9-87108585857f","Type":"ContainerStarted","Data":"8b219068bf276ec7d17c83444d68a66c5dd3810589524b7463efea69b4e4c93a"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.086397 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-87w6x" event={"ID":"9a69d20a-c80f-4814-9cf2-fce9ade638c5","Type":"ContainerStarted","Data":"c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.087765 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjdj6" event={"ID":"19dac4e8-493c-456c-b8ea-cc1e48b9867c","Type":"ContainerStarted","Data":"c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.087793 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjdj6" event={"ID":"19dac4e8-493c-456c-b8ea-cc1e48b9867c","Type":"ContainerStarted","Data":"3dd24e5181fdbfc082ca7fd8bce87238f37950336c1511451de9f736679b9bb6"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.089318 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f" exitCode=0 Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.089374 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.089391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"1f4a3222d09201c6993589c29f235f50b4fb2e65ce3bcb82040308b4d801ddd8"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.091620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.091657 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.091672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"1cb268f717299fcdd639a355a90abd4852aeb732622ccb1db9e64fb260260b14"} Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.101777 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.119561 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.142424 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.155326 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.186493 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.204796 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.223111 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.233753 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.253891 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.265075 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-contr
oller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.281676 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.296439 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.350953 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.383134 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.396283 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.423645 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.433807 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.440392 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.477934 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.502743 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.585909 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.602375 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.644687 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.660864 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.684441 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.699143 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.733935 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.762963 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z 
is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.779649 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.791719 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.805418 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.823507 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.846697 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.862330 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.908906 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.952294 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:54 crc kubenswrapper[4858]: I1205 13:56:54.982654 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:54Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.019860 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.066360 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.096590 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268"} Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.096628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2"} Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.096637 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba"} Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.096645 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe"} Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.096653 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2"} Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.098147 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e"} Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.100643 4858 generic.go:334] "Generic (PLEG): container finished" podID="1b855b1c-b9bc-4249-80a9-87108585857f" containerID="cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd" exitCode=0 Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.100788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" event={"ID":"1b855b1c-b9bc-4249-80a9-87108585857f","Type":"ContainerDied","Data":"cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd"} Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.110653 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.138401 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\"
,\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.188021 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.222023 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.266445 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.302541 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.341886 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.380111 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.417059 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.457685 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.460027 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.460133 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460169 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:56:59.460143532 +0000 UTC m=+28.007741671 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.460203 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.460244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460281 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460308 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460326 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460336 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered 
Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460397 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:59.460366998 +0000 UTC m=+28.007965137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460420 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:59.460410649 +0000 UTC m=+28.008008788 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460427 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460466 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:59.46045336 +0000 UTC m=+28.008051499 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460614 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460639 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.460657 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.460284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.461018 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 13:56:59.460685466 +0000 UTC m=+28.008283605 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.500270 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.520167 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.523944 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.538172 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.577258 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.618057 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.665152 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.701270 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.737671 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.780315 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.824237 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z 
is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.858903 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.899081 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.899105 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.899082 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.899231 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.899333 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:56:55 crc kubenswrapper[4858]: E1205 13:56:55.899465 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.901592 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.938056 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:55 crc kubenswrapper[4858]: I1205 13:56:55.977066 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:55Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.018367 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.067470 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.099006 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.105909 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815"} Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.107523 4858 generic.go:334] "Generic (PLEG): container finished" podID="1b855b1c-b9bc-4249-80a9-87108585857f" containerID="4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97" exitCode=0 Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.107550 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" event={"ID":"1b855b1c-b9bc-4249-80a9-87108585857f","Type":"ContainerDied","Data":"4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97"} Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.138912 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.181519 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.225458 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z 
is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.267784 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.326359 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.341498 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.383254 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.419628 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.461895 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.499492 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.538176 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.608238 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.619085 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.660689 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.710903 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.738552 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.777299 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.819482 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.864367 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.898587 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.938176 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:56 crc kubenswrapper[4858]: I1205 13:56:56.980627 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:56Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.026480 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z 
is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.059579 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.099098 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.112908 4858 generic.go:334] "Generic (PLEG): container finished" podID="1b855b1c-b9bc-4249-80a9-87108585857f" containerID="eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d" exitCode=0 Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.112947 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" event={"ID":"1b855b1c-b9bc-4249-80a9-87108585857f","Type":"ContainerDied","Data":"eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d"} Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.143694 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.179452 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.222317 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.245483 4858 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.250935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.250969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.250977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.251058 4858 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.263857 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{
\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.313549 4858 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.313772 4858 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.314710 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.314757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.314767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.314781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.314790 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.333925 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.337192 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.337239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.337251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.337273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.337285 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.350347 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.351748 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991
e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.354026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.354055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.354063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.354078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.354088 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.373081 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.376432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.376479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.376493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.376515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.376527 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.381234 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.388997 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.393759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.393805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.393816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.393864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.393875 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.405067 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.405234 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.406808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.406872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.406887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.406906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.406919 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.420869 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.463707 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.509752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.509802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.509818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.509858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.509870 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.518616 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.540315 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.581377 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.611784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.611849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.611859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.611879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.611889 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.619004 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.657037 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.697426 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.714015 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.714049 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.714057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.714069 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.714079 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.739875 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.778231 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:57Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.815774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.815854 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.815863 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.815876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.815885 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.900250 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.900346 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.900622 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.900669 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.900705 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:57 crc kubenswrapper[4858]: E1205 13:56:57.900748 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.918292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.918336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.918346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.918370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:57 crc kubenswrapper[4858]: I1205 13:56:57.918381 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:57Z","lastTransitionTime":"2025-12-05T13:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.020642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.020677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.020714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.020736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.020748 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.117703 4858 generic.go:334] "Generic (PLEG): container finished" podID="1b855b1c-b9bc-4249-80a9-87108585857f" containerID="ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab" exitCode=0 Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.117777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" event={"ID":"1b855b1c-b9bc-4249-80a9-87108585857f","Type":"ContainerDied","Data":"ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.122530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.122568 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.122579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.122596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.122606 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.123158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.141705 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.155111 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.172157 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.190692 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z 
is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.204278 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.214371 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.224592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.224629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.224638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.224651 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.224691 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.226439 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.238017 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.250720 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.263571 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.278278 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.290454 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.298953 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.327286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.327326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.327334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.327348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.327359 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.339376 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.385372 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583c
d40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:58Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.429416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.429452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.429461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.429475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.429485 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.531097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.531134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.531143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.531159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.531169 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.633231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.633271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.633280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.633292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.633301 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.735818 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.735868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.735877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.735893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.735904 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.838670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.838712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.838723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.838742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.838754 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.941601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.941640 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.941649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.941664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:58 crc kubenswrapper[4858]: I1205 13:56:58.941675 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:58Z","lastTransitionTime":"2025-12-05T13:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.044181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.044231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.044243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.044280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.044290 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.131767 4858 generic.go:334] "Generic (PLEG): container finished" podID="1b855b1c-b9bc-4249-80a9-87108585857f" containerID="f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601" exitCode=0 Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.131805 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" event={"ID":"1b855b1c-b9bc-4249-80a9-87108585857f","Type":"ContainerDied","Data":"f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.147152 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.147433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.147445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.147463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.147473 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.147674 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.160378 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.170744 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.180535 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.200762 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.214893 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.224645 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.238288 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.249777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.249814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.249842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.249857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.249870 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.255156 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z 
is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.269497 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.287876 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.298073 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.316278 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.327756 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.338613 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:56:59Z is after 2025-08-24T17:21:41Z" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.352053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.352101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.352109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.352121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.352129 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.454498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.454530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.454539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.454552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.454561 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.497989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.498089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498123 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:57:07.498104259 +0000 UTC m=+36.045702398 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.498144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.498168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498177 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.498187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498216 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:07.498206821 +0000 UTC m=+36.045804960 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498303 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498315 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498313 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498351 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:07.498342395 +0000 UTC m=+36.045940534 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498325 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498361 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498368 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498375 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498377 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:07.498372176 +0000 UTC m=+36.045970315 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.498396 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:07.498389406 +0000 UTC m=+36.045987545 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.556393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.556422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.556432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.556446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.556455 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.659060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.659087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.659096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.659108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.659117 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.763318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.763644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.763657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.763670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.763678 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.866439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.866476 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.866487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.866503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.866513 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.898962 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.899075 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.898962 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.899195 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.899119 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:56:59 crc kubenswrapper[4858]: E1205 13:56:59.899276 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.969109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.969155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.969166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.969182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:56:59 crc kubenswrapper[4858]: I1205 13:56:59.969195 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:56:59Z","lastTransitionTime":"2025-12-05T13:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.071116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.071150 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.071162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.071178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.071190 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.146682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.174355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.174387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.174396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.174410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.174421 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.276993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.277032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.277043 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.277057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.277067 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.379691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.379723 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.379734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.379752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.379766 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.483613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.483641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.483649 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.483665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.483674 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.586207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.586262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.586273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.586291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.586303 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.689058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.689099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.689108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.689131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.689142 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.791559 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.791593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.791602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.791614 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.791623 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.894162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.894228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.894241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.894258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.894268 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.996655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.996702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.996711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.996732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:00 crc kubenswrapper[4858]: I1205 13:57:00.996744 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:00Z","lastTransitionTime":"2025-12-05T13:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.099443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.099758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.099773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.099794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.099805 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.153122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" event={"ID":"1b855b1c-b9bc-4249-80a9-87108585857f","Type":"ContainerStarted","Data":"a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.153398 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.153458 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.169923 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55f
c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.176839 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.176893 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.184424 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.196314 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.202492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.202528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.202537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.202554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.202565 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.211064 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.228070 4858 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.239023 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.250798 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.262983 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.273508 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.282200 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.292245 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.304326 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.304370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.304380 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.304394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.304404 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.312604 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.328131 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.340628 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.355526 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.371344 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.388555 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.404038 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.406493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.406700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.406784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.406913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.407000 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.426674 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352f
e624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.442886 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.457631 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.478171 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.497926 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.509487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.509687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.509782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.509929 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.510033 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.511679 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.525527 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.539432 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.552301 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.566375 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.582737 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.597114 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.612768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.612800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.612809 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.612840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.612852 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.715075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.715112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.715123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.715137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.715148 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.817383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.817420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.817432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.817446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.817457 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.898599 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:01 crc kubenswrapper[4858]: E1205 13:57:01.898985 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.899095 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:01 crc kubenswrapper[4858]: E1205 13:57:01.899233 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.899683 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:01 crc kubenswrapper[4858]: E1205 13:57:01.900066 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.919106 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.920382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.920407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.920414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.920428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.920437 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:01Z","lastTransitionTime":"2025-12-05T13:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.932248 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.943561 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.953629 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.973198 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.984951 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:01 crc kubenswrapper[4858]: I1205 13:57:01.995529 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.011272 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.022320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.022363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.022374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.022421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.022433 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.037548 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497b
c7b828cfd689a2bd80b7bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.052708 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.066947 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.078313 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.093157 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.107666 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.119902 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.125631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.125667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.125677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.125692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.125703 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.156151 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.227197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.227243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.227255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.227269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.227280 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.329381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.329407 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.329415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.329429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.329438 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.431213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.431507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.431607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.431800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.431966 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.533862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.533896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.533907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.533945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.533955 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.636558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.636600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.636610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.636626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.636635 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.738626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.738677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.738687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.738701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.738714 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.841509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.841552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.841566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.841585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.841597 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.943576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.943609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.943619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.943633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:02 crc kubenswrapper[4858]: I1205 13:57:02.943643 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:02Z","lastTransitionTime":"2025-12-05T13:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.048425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.048485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.048502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.048523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.048535 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.151124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.151172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.151183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.151202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.151215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.158096 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.254138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.254196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.254210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.254227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.254238 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.268912 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.356816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.356898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.356916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.356933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.356944 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.459703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.459738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.459749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.459767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.459779 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.561650 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.561720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.561730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.561743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.561752 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.663876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.663911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.663922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.663933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.663942 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.766948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.767002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.767014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.767036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.767052 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.871293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.871329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.871340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.871355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.871366 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.899003 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:03 crc kubenswrapper[4858]: E1205 13:57:03.899121 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.899009 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.899000 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:03 crc kubenswrapper[4858]: E1205 13:57:03.901267 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:03 crc kubenswrapper[4858]: E1205 13:57:03.899677 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.974294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.974329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.974338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.974352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:03 crc kubenswrapper[4858]: I1205 13:57:03.974362 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:03Z","lastTransitionTime":"2025-12-05T13:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.077016 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.077083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.077094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.077111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.077123 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.164386 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/0.log" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.168057 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd" exitCode=1 Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.168108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.168932 4858 scope.go:117] "RemoveContainer" containerID="e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.179698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.179735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.179743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.179759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.179769 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.188531 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.201636 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.216582 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.233640 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.245509 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.257205 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.276240 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.282724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.282763 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.282774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.282791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.282803 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.288910 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.297989 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.309278 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.321434 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.332518 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.343529 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.359967 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.376031 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:04Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.385545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.385571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.385583 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.385601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.385611 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.488307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.488610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.488622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.488639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.488650 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.591654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.591692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.591702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.591724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.591735 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.693672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.693707 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.693718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.693733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.693746 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.796229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.796255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.796263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.796276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.796286 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.897882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.897906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.897913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.897937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:04 crc kubenswrapper[4858]: I1205 13:57:04.897946 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:04Z","lastTransitionTime":"2025-12-05T13:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.000097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.000125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.000134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.000147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.000156 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.102631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.102659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.102668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.102682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.102692 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.173013 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/0.log" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.179810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.180256 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.202681 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.204643 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.204678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.204689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.204704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.204716 4858 setters.go:603] "Node became not ready" node="crc" 
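Every status patch in this stretch fails the same way: the API server cannot call the pod.network-node-identity.openshift.io webhook on https://127.0.0.1:9743 because the serving certificate expired on 2025-08-24T17:21:41Z, long before the logged current time of 2025-12-05T13:57:05Z. A minimal Go sketch for confirming the validity window from the node itself, assuming the endpoint from the logged URL is still listening; the handshake skips verification only so the expired certificate can be inspected:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // InsecureSkipVerify lets the handshake complete even though the
        // certificate is expired; we only want to read its fields.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now().UTC()
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
        // mirrors the "current time ... is after ..." check in the x509 error
        fmt.Printf("expired:   %v\n", now.After(cert.NotAfter))
    }

If the sketch's assumptions hold, the printed notAfter should match the 2025-08-24T17:21:41Z bound quoted in each x509 error below.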
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.226757 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.242455 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.255444 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.275494 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.287151 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.297393 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.306384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.306428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.306437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.306452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.306466 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.310763 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-
dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.325181 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.339600 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.351419 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.367647 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.388530 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.400457 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.408496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.408535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.408546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.408565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.408578 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.420299 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.511394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.511436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.511445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.511458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.511468 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.614404 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.614441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.614457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.614472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.614485 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.716807 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.716853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.716860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.716874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.716883 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.819451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.819503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.819514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.819530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.819543 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.898572 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.898594 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:05 crc kubenswrapper[4858]: E1205 13:57:05.898677 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:05 crc kubenswrapper[4858]: E1205 13:57:05.898779 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.898572 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:05 crc kubenswrapper[4858]: E1205 13:57:05.898896 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.921278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.921309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.921319 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.921331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.921359 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:05Z","lastTransitionTime":"2025-12-05T13:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.966423 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"] Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.966864 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.968290 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.968814 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.977131 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:05 crc kubenswrapper[4858]: I1205 13:57:05.987724 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:05Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.001729 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.012772 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.022639 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.023089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.023140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.023149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.023162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.023172 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.033791 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.043390 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.053514 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.060421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl9vh\" (UniqueName: \"kubernetes.io/projected/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-kube-api-access-pl9vh\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.060462 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.060501 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.060602 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.061984 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.072116 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.087548 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.099023 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.108735 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.120796 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.124940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.124969 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:06 crc 
kubenswrapper[4858]: I1205 13:57:06.124978 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.125008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.125021 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.137176 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b244
9be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.151870 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:06Z is after 2025-08-24T17:21:41Z"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.161301 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.161397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl9vh\" (UniqueName: \"kubernetes.io/projected/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-kube-api-access-pl9vh\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.161455 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.161501 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.162011 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.162054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.169372 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.182317 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl9vh\" (UniqueName: \"kubernetes.io/projected/a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804-kube-api-access-pl9vh\") pod \"ovnkube-control-plane-749d76644c-pkkmh\" (UID: \"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.227732 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.227770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.227778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.227791 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.227800 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.278792 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh"
Dec 05 13:57:06 crc kubenswrapper[4858]: W1205 13:57:06.289568 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1aba3b1_5c58_4ce7_b3b3_d4fd0d940804.slice/crio-ecff0fe792e5ffbdd52d0aaa08a48860dc844a983a2b40718054082863a1500f WatchSource:0}: Error finding container ecff0fe792e5ffbdd52d0aaa08a48860dc844a983a2b40718054082863a1500f: Status 404 returned error can't find the container with id ecff0fe792e5ffbdd52d0aaa08a48860dc844a983a2b40718054082863a1500f
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.330385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.330597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.330669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.330746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.330886 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.433479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.433562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.433574 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.433587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.433596 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.536058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.536094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.536103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.536115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.536125 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.638856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.638888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.638898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.638912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.638924 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.741314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.741349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.741359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.741374 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.741385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.843995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.844026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.844035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.844047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.844055 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.945626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.945661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.945671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.945687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:06 crc kubenswrapper[4858]: I1205 13:57:06.945697 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:06Z","lastTransitionTime":"2025-12-05T13:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.048143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.048179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.048189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.048205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.048216 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.150608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.150645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.150655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.150666 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.150674 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.187674 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" event={"ID":"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804","Type":"ContainerStarted","Data":"6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c"}
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.187720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" event={"ID":"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804","Type":"ContainerStarted","Data":"b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609"}
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.187732 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" event={"ID":"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804","Type":"ContainerStarted","Data":"ecff0fe792e5ffbdd52d0aaa08a48860dc844a983a2b40718054082863a1500f"}
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.190623 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/1.log"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.191301 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/0.log"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.199287 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7" exitCode=1
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.199325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7"}
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.199409 4858 scope.go:117] "RemoveContainer" containerID="e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd"
Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.200042 4858 scope.go:117] "RemoveContainer" containerID="3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7"
"RemoveContainer" containerID="3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.200183 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.214100 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.223752 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.231474 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.241507 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon
-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.251442 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.252876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.252905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.252916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.252932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.252943 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.262690 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.271957 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.284287 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.304232 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.318233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.329662 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.339978 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.354932 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.355730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.355765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.355777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.355793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.355803 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.368637 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.379898 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.390924 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.402004 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5d
b8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.413904 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.424467 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.436461 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5jh87"] Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.437154 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.437306 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.438472 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91
fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.458374 4858 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.458443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.458457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.458473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.458482 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.460126 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b244
9be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:/
/03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.478725 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
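
Each of these failures quotes the entire JSON status patch inline, with quotes escaped once by klog and again by the journal. Once a payload has been extracted and unquoted, it is ordinary Pod status JSON. A small Go sketch that pulls the unready conditions out of such a patch; the payload here is a hand-copied fragment of the ovnkube-node-jtntj patch above, reduced to its Ready condition, so the unescaping step is assumed already done.

// patchpeek.go: decode a status patch like the ones quoted in the errors
// above and list the conditions that are not True.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

const patch = `{"status":{"conditions":[{"type":"Ready","status":"False","reason":"ContainersNotReady","message":"containers with unready status: [ovnkube-controller]"}]}}`

func main() {
	var p struct {
		Status struct {
			Conditions []struct {
				Type    string `json:"type"`
				Status  string `json:"status"`
				Reason  string `json:"reason"`
				Message string `json:"message"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(patch), &p); err != nil {
		log.Fatal(err)
	}
	for _, c := range p.Status.Conditions {
		if c.Status != "True" {
			fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}
}
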
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.479118 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.482478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.482540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
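
Alongside the webhook failures, every heartbeat reports the node NotReady with "no CNI configuration file in /etc/kubernetes/cni/net.d/": ovnkube-controller is crash-looping (exit code 1 in the patch above) before it can write its network config. A plausible on-node check, sketched in Go against the directory named in the log; treating .conf, .conflist, and .json as CNI config files is the conventional set and an assumption here, not something the log states.

// cnicheck.go: does the CNI conf dir from the NetworkPluginNotReady message
// contain any network configuration yet? Run on the node itself.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed config extensions
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir,
			"- the network plugin has not written its config yet")
	}
}
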
event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.482553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.482569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.482580 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.493113 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.500904 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.504629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.504709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
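
This is now the third identical rejection of the node-status patch, full image list and all; the kubelet will keep retrying and keep failing until the webhook's certificate is replaced. To see directly what is being served on the port from the log, a Go sketch that completes a handshake against 127.0.0.1:9743 and prints the peer certificate's validity window. InsecureSkipVerify is deliberate here: the point is to read the expired chain rather than reject it.

// dialcheck.go: fetch and inspect the certificate actually served on the
// webhook endpoint named in the errors above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true}) // read the cert, don't verify it
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore.UTC(), cert.NotAfter.UTC())
	}
}
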
event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.504744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.504762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.504780 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.512128 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.519499 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.522258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.522369 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.522433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.522513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.522574 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.526500 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.532811 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.536282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.536435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.536495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.536553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.536607 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.538766 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.547651 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.547759 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.549510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.549540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.549551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.549567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.549578 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.550462 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.563061 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.573750 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.573864 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8dw\" (UniqueName: \"kubernetes.io/projected/6197c8ee-275b-44dd-b402-e4b8039c4997-kube-api-access-mb8dw\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.573907 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.573933 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.573951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.573972 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.573989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574072 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574128 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:23.574109546 +0000 UTC m=+52.121707685 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574430 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:57:23.574421394 +0000 UTC m=+52.122019533 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574507 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574518 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574528 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574549 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:23.574543297 +0000 UTC m=+52.122141436 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574584 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574604 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:23.574598529 +0000 UTC m=+52.122196668 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574642 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574649 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574656 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.574673 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:23.57466814 +0000 UTC m=+52.122266279 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.582176 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991
e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.593814 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.602471 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.614165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.626571 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.641251 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.652891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.652938 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.652947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.652995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.653008 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.654182 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.666523 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.674615 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.674941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb8dw\" (UniqueName: \"kubernetes.io/projected/6197c8ee-275b-44dd-b402-e4b8039c4997-kube-api-access-mb8dw\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.674763 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.677032 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs podName:6197c8ee-275b-44dd-b402-e4b8039c4997 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:08.177005516 +0000 UTC m=+36.724603745 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs") pod "network-metrics-daemon-5jh87" (UID: "6197c8ee-275b-44dd-b402-e4b8039c4997") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.681022 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] 
validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.692240 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.700807 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb8dw\" (UniqueName: \"kubernetes.io/projected/6197c8ee-275b-44dd-b402-e4b8039c4997-kube-api-access-mb8dw\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.704774 4858 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.715436 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.729165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.746684 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.755662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.755692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.755702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.755720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.755732 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.759222 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.769410 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.780693 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.797128 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 
handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.806758 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.817565 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.828274 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:07Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.872162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.872197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.872205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.872218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.872226 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.898721 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.898787 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.898842 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.898863 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.898959 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:07 crc kubenswrapper[4858]: E1205 13:57:07.899052 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.975139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.975202 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.975214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.975232 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:07 crc kubenswrapper[4858]: I1205 13:57:07.975251 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:07Z","lastTransitionTime":"2025-12-05T13:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.077654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.077699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.077713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.077729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.077740 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.179715 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.179908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.179931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.179942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.179955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.179965 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: E1205 13:57:08.179914 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:08 crc kubenswrapper[4858]: E1205 13:57:08.180033 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs podName:6197c8ee-275b-44dd-b402-e4b8039c4997 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:09.180015132 +0000 UTC m=+37.727613271 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs") pod "network-metrics-daemon-5jh87" (UID: "6197c8ee-275b-44dd-b402-e4b8039c4997") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.203998 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/1.log" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.282377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.282408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.282415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.282429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.282440 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.384254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.384287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.384296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.384309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.384321 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.486245 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.486461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.486542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.486612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.486694 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.589210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.589248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.589256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.589269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.589278 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.691306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.691339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.691346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.691359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.691368 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.793579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.793613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.793621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.793633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.793642 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.895656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.895711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.895722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.895735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.895744 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.898905 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:08 crc kubenswrapper[4858]: E1205 13:57:08.899014 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.997806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.997856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.997863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.997877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:08 crc kubenswrapper[4858]: I1205 13:57:08.997887 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:08Z","lastTransitionTime":"2025-12-05T13:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.100169 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.100216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.100229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.100247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.100260 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.190680 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:09 crc kubenswrapper[4858]: E1205 13:57:09.190796 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:09 crc kubenswrapper[4858]: E1205 13:57:09.190875 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs podName:6197c8ee-275b-44dd-b402-e4b8039c4997 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:11.190860852 +0000 UTC m=+39.738458991 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs") pod "network-metrics-daemon-5jh87" (UID: "6197c8ee-275b-44dd-b402-e4b8039c4997") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.202480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.202525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.202536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.202553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.202564 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.304648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.304697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.304712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.304729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.304741 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.407362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.407399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.407410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.407426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.407437 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.509423 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.509456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.509464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.509475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.509484 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.611865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.611952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.611968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.611984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.611994 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.714365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.714399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.714409 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.714427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.714437 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.816656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.816696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.816706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.816720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.816731 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.898583 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.898658 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:09 crc kubenswrapper[4858]: E1205 13:57:09.898706 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.898749 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:09 crc kubenswrapper[4858]: E1205 13:57:09.898786 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:09 crc kubenswrapper[4858]: E1205 13:57:09.898893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.919186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.919216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.919224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.919238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:09 crc kubenswrapper[4858]: I1205 13:57:09.919246 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:09Z","lastTransitionTime":"2025-12-05T13:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.021139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.021168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.021177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.021188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.021213 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.123221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.123281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.123299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.123323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.123343 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.225339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.225381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.225393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.225408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.225423 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.329093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.329130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.329140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.329162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.329173 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.434350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.434402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.434411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.434425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.434451 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.536683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.536748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.536759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.536773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.536782 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.639284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.639314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.639321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.639333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.639341 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.682216 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.695144 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.704787 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.728658 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.741718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.741754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.741762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.741775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.741787 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.765240 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352f
e624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.779326 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.788809 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.801142 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.817773 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 
handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.828439 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.840500 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.843394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.843424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.843435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.843450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.843460 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.849865 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.859539 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.869843 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.878992 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.889385 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.898429 4858 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:10 crc kubenswrapper[4858]: E1205 13:57:10.898535 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.903600 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.916879 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:10Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.945735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.945790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.945799 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.945813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:10 crc kubenswrapper[4858]: I1205 13:57:10.945844 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:10Z","lastTransitionTime":"2025-12-05T13:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.047731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.047765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.047813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.047870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.047886 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.150258 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.150299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.150309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.150324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.150336 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.209297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:11 crc kubenswrapper[4858]: E1205 13:57:11.209440 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:11 crc kubenswrapper[4858]: E1205 13:57:11.209491 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs podName:6197c8ee-275b-44dd-b402-e4b8039c4997 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:15.209478202 +0000 UTC m=+43.757076341 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs") pod "network-metrics-daemon-5jh87" (UID: "6197c8ee-275b-44dd-b402-e4b8039c4997") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.252946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.252975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.252986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.253001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.253012 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.355415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.355449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.355457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.355469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.355479 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.457787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.457849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.457861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.457875 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.457886 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.560225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.560252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.560260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.560272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.560282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.662648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.662678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.662687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.662700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.662712 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.765154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.765228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.765242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.765256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.765290 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.868502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.868534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.868542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.868556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.868566 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.899029 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.899039 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.899060 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:11 crc kubenswrapper[4858]: E1205 13:57:11.899277 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:11 crc kubenswrapper[4858]: E1205 13:57:11.899419 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:11 crc kubenswrapper[4858]: E1205 13:57:11.899557 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.913445 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.928337 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.940003 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.958067 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to 
be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.972122 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.972178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.972194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.972217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.972234 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:11Z","lastTransitionTime":"2025-12-05T13:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.974395 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\
\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:11 crc kubenswrapper[4858]: I1205 13:57:11.985735 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.007719 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.027779 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025
-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.039418 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.049936 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.073979 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.075157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.075187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.075198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.075213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.075224 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:12Z","lastTransitionTime":"2025-12-05T13:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.087809 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.100233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.114311 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.127259 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.144586 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 
handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.155061 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.177923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.178188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.178264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.178340 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.178402 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:12Z","lastTransitionTime":"2025-12-05T13:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.282577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.282638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.282712 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.282738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.282758 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:12Z","lastTransitionTime":"2025-12-05T13:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.385509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.385563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.385572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.385584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.385593 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:12Z","lastTransitionTime":"2025-12-05T13:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.487935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.487971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.487985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.488062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.488077 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:12Z","lastTransitionTime":"2025-12-05T13:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.590580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.590624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.590637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.590654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.590666 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:12Z","lastTransitionTime":"2025-12-05T13:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.692975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.693012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.693020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.693036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.693045 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:12Z","lastTransitionTime":"2025-12-05T13:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
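The two status_manager entries above fail for the same reason: the serving certificate behind the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before the node's current clock. A minimal sketch of how one might confirm the certificate window from the node, assuming Python 3 with the third-party cryptography package is available (host and port are taken from the log):

```python
import socket
import ssl

from cryptography import x509  # assumption: pip install cryptography

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint named in the error

# Disable verification: a normal handshake would abort on the expired cert.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER is returned even unverified

cert = x509.load_der_x509_certificate(der)
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)  # the log reports 2025-08-24T17:21:41Z
```

On CRC this is the typical symptom of starting a cluster that has been stopped for a long time; the kubelet keeps retrying the status patches, and the failures clear once the cluster rotates its internal certificates.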
Dec 05 13:57:12 crc kubenswrapper[4858]: I1205 13:57:12.898184 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:57:12 crc kubenswrapper[4858]: E1205 13:57:12.898309 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
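The setters.go:603 entries are the kubelet republishing that same Ready=False condition on every sync loop. The identical condition can be read from outside the node with the official kubernetes Python client; a sketch, assuming the package is installed and a kubeconfig for the cluster is reachable:

```python
from kubernetes import client, config  # assumption: pip install kubernetes

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

node = v1.read_node("crc")  # node name from the log
for cond in node.status.conditions:
    # The Ready condition carries the same reason/message seen in setters.go:603.
    print(f"{cond.type}: status={cond.status} reason={cond.reason}")
    if cond.type == "Ready" and cond.status != "True":
        print("  message:", cond.message)
```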
Dec 05 13:57:13 crc kubenswrapper[4858]: I1205 13:57:13.899240 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:57:13 crc kubenswrapper[4858]: I1205 13:57:13.899392 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:57:13 crc kubenswrapper[4858]: I1205 13:57:13.899460 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:57:13 crc kubenswrapper[4858]: E1205 13:57:13.900091 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:57:13 crc kubenswrapper[4858]: E1205 13:57:13.900209 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:57:13 crc kubenswrapper[4858]: E1205 13:57:13.900336 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:57:14 crc kubenswrapper[4858]: I1205 13:57:14.899469 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:57:14 crc kubenswrapper[4858]: E1205 13:57:14.899626 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:57:15 crc kubenswrapper[4858]: I1205 13:57:15.277144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:57:15 crc kubenswrapper[4858]: E1205 13:57:15.277284 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 05 13:57:15 crc kubenswrapper[4858]: E1205 13:57:15.277338 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs podName:6197c8ee-275b-44dd-b402-e4b8039c4997 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:23.277321289 +0000 UTC m=+51.824919428 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs") pod "network-metrics-daemon-5jh87" (UID: "6197c8ee-275b-44dd-b402-e4b8039c4997") : object "openshift-multus"/"metrics-daemon-secret" not registered
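The metrics-certs failure is distinct from the CNI problem: "not registered" means the kubelet's object cache has not yet synced the openshift-multus/metrics-daemon-secret secret, and nestedpendingoperations backs off before retrying, here 8 s (consistent with the kubelet's usual exponential backoff, roughly 0.5 s doubling per failure) until 13:57:23. Whether the secret actually exists server-side can be checked under the same kubernetes-client assumption as above:

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

try:
    sec = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
    print("secret exists; keys:", sorted(sec.data or {}))
except ApiException as e:
    # A 404 would mean the object is genuinely missing, not merely unsynced.
    print("lookup failed:", e.status, e.reason)
```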
Dec 05 13:57:15 crc kubenswrapper[4858]: I1205 13:57:15.900932 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:57:15 crc kubenswrapper[4858]: E1205 13:57:15.901446 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:57:15 crc kubenswrapper[4858]: I1205 13:57:15.901103 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:57:15 crc kubenswrapper[4858]: I1205 13:57:15.901690 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:57:15 crc kubenswrapper[4858]: E1205 13:57:15.902959 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:57:15 crc kubenswrapper[4858]: E1205 13:57:15.903055 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Has your network provider started?"} Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.797735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.797800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.797809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.797845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.797856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:16Z","lastTransitionTime":"2025-12-05T13:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.898628 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:16 crc kubenswrapper[4858]: E1205 13:57:16.898789 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.900281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.900316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.900326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.900336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:16 crc kubenswrapper[4858]: I1205 13:57:16.900346 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:16Z","lastTransitionTime":"2025-12-05T13:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.002514 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.002588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.002603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.002624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.002656 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.106708 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.106744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.106753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.106771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.106789 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.211537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.211609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.211629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.211656 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.211675 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.314545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.314581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.314590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.314604 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.314614 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.416751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.416789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.416800 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.416814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.416839 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.519809 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.519870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.519882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.519898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.519913 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.621966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.622008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.622019 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.622036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.622048 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.715109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.715148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.715160 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.715173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.715182 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
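The loop above repeats at roughly 100ms intervals and hinges on a single check: the kubelet keeps the node NotReady until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. Below is a minimal sketch of that readiness test, assuming nothing beyond the directory path named in the log message; the accepted extensions mirror what libcni loads, and this is an illustration, not the kubelet's actual code.

// cnicheck.go - sketch: report whether the CNI conf dir named in the
// log contains any network configuration the runtime could load.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		// libcni considers .conf, .conflist and .json files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config present:", e.Name())
			found = true
		}
	}
	if !found {
		// This is the state the kubelet is reporting: NetworkReady=false,
		// NetworkPluginNotReady, node stays NotReady.
		fmt.Println("no CNI configuration file in", confDir)
	}
}

An empty directory here is expected on this node until the network operator (which writes the OVN-Kubernetes config on OpenShift) manages to start; the entries that follow show why it cannot.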
Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.726333 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:17Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.730097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.730177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.730206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.730220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.730228 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.744032 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:17Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.747864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.747912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
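Both patch attempts above die at the same admission webhook: node.network-node-identity.openshift.io at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, while the node clock reads 2025-12-05. A hedged sketch for confirming the validity window from the node itself follows; only the endpoint comes from the log, and InsecureSkipVerify is deliberate, because the goal is to inspect the certificate that verification rejects.

// certcheck.go - sketch: dial the webhook endpoint from the log and
// print the serving certificate's validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint taken from the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // inspect, don't trust: verification is what fails
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	fmt.Println("subject:  ", leaf.Subject)
	fmt.Println("notBefore:", leaf.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", leaf.NotAfter.Format(time.RFC3339))
	if time.Now().After(leaf.NotAfter) {
		// Matches the kubelet error: "certificate has expired or is
		// not yet valid ... is after 2025-08-24T17:21:41Z".
		fmt.Println("certificate is expired")
	}
}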
event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.747923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.747939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.747952 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.762182 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:17Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.766133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.766170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
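The unreadable blob in each err value is the node-status patch, Go %q-quoted twice: once when the kubelet formats "failed to patch status %q", and again when the structured logger renders err="...". Two rounds of strconv.Unquote recover it. Below is a sketch, assuming one such journal line has been saved verbatim to a file (status-error.log is a hypothetical name).

// patchdecode.go - sketch: peel both quoting layers off a captured
// "Error updating node status" line and pretty-print the JSON patch.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	raw, err := os.ReadFile("status-error.log") // hypothetical capture of one journal line
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	line := string(raw)

	// Layer 1: the err="..." value quoted by the structured logger.
	i := strings.Index(line, `err=`)
	if i < 0 {
		fmt.Println("no err= field on this line")
		return
	}
	quoted, err := strconv.QuotedPrefix(line[i+len(`err=`):])
	if err != nil {
		fmt.Println("unterminated err value (truncated line?):", err)
		return
	}
	msg, _ := strconv.Unquote(quoted)

	// Layer 2: inside the message, the patch itself was %q-quoted
	// ("failed to patch status %q for node %q: ...").
	j := strings.Index(msg, `"`)
	if j < 0 {
		fmt.Println("no quoted patch in message")
		return
	}
	quoted, err = strconv.QuotedPrefix(msg[j:])
	if err != nil {
		fmt.Println("unterminated patch:", err)
		return
	}
	patch, _ := strconv.Unquote(quoted)

	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(patch), "", "  "); err != nil {
		fmt.Println("patch is not valid JSON:", err)
		return
	}
	fmt.Println(pretty.String())
}

Decoded, the patch is just the strategic-merge update of the four node conditions plus the allocatable/capacity, images, and nodeInfo fields seen in the first rejected attempt above.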
event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.766179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.766193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.766203 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.781498 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:17Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.786848 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.786892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.786901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.786917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.786926 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.800715 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:17Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.800959 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.803111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.803179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.803194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.803220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.803237 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.898690 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.898808 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.898872 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.899032 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.899253 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:17 crc kubenswrapper[4858]: E1205 13:57:17.899378 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.906676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.906737 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.906752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.906776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:17 crc kubenswrapper[4858]: I1205 13:57:17.906792 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:17Z","lastTransitionTime":"2025-12-05T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.010156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.010244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.010262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.010294 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.010306 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.113617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.113750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.113771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.113797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.113813 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.216348 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.216408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.216420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.216442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.216456 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.319480 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.319531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.319539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.319554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.319565 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.422628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.422675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.422683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.422698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.422707 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.524951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.524990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.524999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.525013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.525023 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.627961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.628008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.628017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.628030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.628039 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.731060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.731136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.731156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.731184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.731204 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.833495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.833545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.833556 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.833575 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.833591 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.899264 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:18 crc kubenswrapper[4858]: E1205 13:57:18.899456 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.936331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.936394 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.936408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.936424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:18 crc kubenswrapper[4858]: I1205 13:57:18.936434 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:18Z","lastTransitionTime":"2025-12-05T13:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.039782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.039896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.039919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.039953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.039978 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.142108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.142143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.142153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.142167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.142177 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.245466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.245553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.245582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.245619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.245647 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.349567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.349685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.349720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.349758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.349835 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.452873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.452945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.452962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.452988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.453006 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.555691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.555766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.555792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.555829 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.555884 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.658678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.658725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.658735 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.658752 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.658763 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.761444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.761486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.761498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.761512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.761521 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.864505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.864553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.864567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.864588 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.864600 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.899445 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.899516 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:19 crc kubenswrapper[4858]: E1205 13:57:19.899611 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.899377 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:19 crc kubenswrapper[4858]: E1205 13:57:19.899871 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:19 crc kubenswrapper[4858]: E1205 13:57:19.900239 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.968535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.968572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.968579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.968591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:19 crc kubenswrapper[4858]: I1205 13:57:19.968601 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:19Z","lastTransitionTime":"2025-12-05T13:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.071457 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.071500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.071515 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.071531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.071542 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.164275 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.174206 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.177785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.177866 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.177877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.177891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.177902 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.178845 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.196689 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.218281 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.228608 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.242964 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.255623 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.268748 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.279839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.279880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.279890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.279906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.279917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.281443 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.299650 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 
Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.314339 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.328336 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.341218 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.352439 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.365494 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.380000 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.382641 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.382702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.382713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.382753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.382766 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.402908 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352f
e624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.420236 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:20Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.485135 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.485200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.485212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.485247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.485260 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.587611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.587717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.587736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.587762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.587775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.691109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.691147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.691156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.691170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.691179 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.793612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.793855 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.793954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.794025 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.794094 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.896205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.896237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.896248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.896262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.896273 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.898847 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:20 crc kubenswrapper[4858]: E1205 13:57:20.898984 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.998557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.998590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.998600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.998617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:20 crc kubenswrapper[4858]: I1205 13:57:20.998650 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:20Z","lastTransitionTime":"2025-12-05T13:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.102478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.102522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.102532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.102586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.102616 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.205963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.206082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.206103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.206164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.206187 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.308777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.308814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.308823 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.308859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.308874 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.411608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.411655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.411672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.411687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.411698 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.514489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.514572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.514595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.514623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.514647 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.617341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.617387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.617396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.617411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.617421 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.720082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.720131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.720140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.720159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.720171 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.824184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.824237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.824246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.824267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.824279 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.899471 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.899467 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:21 crc kubenswrapper[4858]: E1205 13:57:21.899652 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:21 crc kubenswrapper[4858]: E1205 13:57:21.899768 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.899358 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:21 crc kubenswrapper[4858]: E1205 13:57:21.900166 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.900534 4858 scope.go:117] "RemoveContainer" containerID="3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.922293 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-05T13:57:21Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.926431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.926611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.926760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.926909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.927051 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:21Z","lastTransitionTime":"2025-12-05T13:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.940193 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:21Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.954096 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-12-05T13:57:21Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.969453 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:21Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:21 crc kubenswrapper[4858]: I1205 13:57:21.990060 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:21Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.004694 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.017723 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.030721 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.031726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.031774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.031788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.031808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.031824 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.049085 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b244
9be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1de1c323fb7662dc280f6f753d322dd5bad497bc7b828cfd689a2bd80b7bbbd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:03Z\\\",\\\"message\\\":\\\":03.750845 6048 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751056 6048 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1205 13:57:03.751587 6048 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:03.751604 6048 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:03.751627 6048 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:03.751675 6048 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:03.751693 6048 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 13:57:03.751699 6048 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 13:57:03.751714 6048 factory.go:656] Stopping watch factory\\\\nI1205 13:57:03.751738 6048 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 13:57:03.751746 6048 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:03.751754 6048 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:03.751761 6048 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:03.751770 6048 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 13:57:03.751773 6048 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:/
/03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.059085 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.069505 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.079901 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.092127 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.101883 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.115599 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.137212 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.138347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.138412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.138425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.138438 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.138470 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.148184 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.158731 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 
13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.168524 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.178693 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.194967 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af70
6103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.206909 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.220902 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.231517 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.241032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.241066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.241076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.241089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.241098 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.245982 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.257737 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/1.log" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.260450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.261116 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.267577 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.285591 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.301700 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.319388 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.342930 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.345190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.345293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.345362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.345443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.345548 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.359182 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.377710 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.397521 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.414406 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 
13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.432001 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.444616 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.449296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.449341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.449355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.449372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.449385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.459976 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.474209 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.487174 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 
13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.500392 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.513207 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.527181 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.541031 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.551884 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.552033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.552094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.552155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.552213 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.556253 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.575426 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5c
c5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.587718 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.599098 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.614798 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.636139 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.651631 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.654765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.654920 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.654992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.655067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.655159 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.669931 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.686267 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.702407 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.715129 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:22Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.757874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.757913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.757922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.757941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.757953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.860479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.860516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.860526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.860539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.860548 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.899024 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:22 crc kubenswrapper[4858]: E1205 13:57:22.899158 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.963036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.963078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.963087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.963102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:22 crc kubenswrapper[4858]: I1205 13:57:22.963111 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:22Z","lastTransitionTime":"2025-12-05T13:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.066542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.067061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.067218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.067312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.067391 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.169699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.169731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.169740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.169753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.169761 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.265514 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/2.log" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.266246 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/1.log" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.268611 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90" exitCode=1 Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.268724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.268803 4858 scope.go:117] "RemoveContainer" containerID="3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.269515 4858 scope.go:117] "RemoveContainer" containerID="d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90" Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.269708 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.271912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.271950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.271959 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.271973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.271982 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.289481 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.300504 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.314777 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.335989 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a0533df01b5bac1439f997f5c605a937724b2449be1934bb0127e021d9e93a7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:06Z\\\",\\\"message\\\":\\\"ient/pkg/client/informers/externalversions/factory.go:117\\\\nI1205 13:57:05.713944 6191 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1205 13:57:05.713978 6191 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1205 13:57:05.713985 6191 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1205 13:57:05.713995 6191 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 13:57:05.714000 6191 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 13:57:05.714016 6191 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 13:57:05.714036 6191 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 13:57:05.714046 6191 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1205 13:57:05.714051 6191 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 13:57:05.714062 6191 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1205 13:57:05.714062 6191 factory.go:656] Stopping watch factory\\\\nI1205 13:57:05.714077 6191 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1205 13:57:05.714075 6191 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1205 13:57:05.714084 6191 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 13:57:05.714092 6191 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 
13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.346688 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.356631 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.356993 4858 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.357196 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs podName:6197c8ee-275b-44dd-b402-e4b8039c4997 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:39.357151029 +0000 UTC m=+67.904749238 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs") pod "network-metrics-daemon-5jh87" (UID: "6197c8ee-275b-44dd-b402-e4b8039c4997") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.359339 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a
888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.371141 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.373728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.373862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.373940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.374042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.374142 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.381726 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.393502 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.408097 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.420224 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.434149 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.449377 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af70
6103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.476542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.476601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.476616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.476635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.476648 4858 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.479741 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.496814 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.517681 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.539432 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.561653 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:23Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.579685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.579751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.579764 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.579790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.579801 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.660349 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.660513 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:57:55.660489148 +0000 UTC m=+84.208087287 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.661104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.661137 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.661178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.661202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661278 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661299 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 
13:57:23.661312 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661279 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661377 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:55.661365631 +0000 UTC m=+84.208963770 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661430 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661444 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:55.661415302 +0000 UTC m=+84.209013441 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661532 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:55.661514985 +0000 UTC m=+84.209113124 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661340 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661561 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661575 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.661627 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 13:57:55.661613048 +0000 UTC m=+84.209211317 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.682678 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.682720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.682733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.682749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.682761 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.786551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.786990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.787003 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.787020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.787067 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.889549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.889595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.889607 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.889625 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.889636 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.898982 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.899025 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.899113 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.899190 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.899278 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:23 crc kubenswrapper[4858]: E1205 13:57:23.899352 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.992498 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.992533 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.992540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.992554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:23 crc kubenswrapper[4858]: I1205 13:57:23.992570 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:23Z","lastTransitionTime":"2025-12-05T13:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.094774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.094814 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.094838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.094853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.094866 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.197153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.197183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.197191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.197204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.197213 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.274139 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/2.log" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.277006 4858 scope.go:117] "RemoveContainer" containerID="d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90" Dec 05 13:57:24 crc kubenswrapper[4858]: E1205 13:57:24.277175 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.287529 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.299297 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.299333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.299344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.299359 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.299371 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.300268 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.311022 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 
13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.326715 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.340770 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.353434 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.365993 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.384137 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.395481 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.401981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.402050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.402065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.402082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.402093 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.406146 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.416798 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.426437 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.435644 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.446231 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.456354 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.471176 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.487795 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.496728 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:24Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.504427 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.504474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.504483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.504502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.504511 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.607342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.607391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.607399 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.607413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.607429 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.709919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.709947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.709955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.709966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.709975 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.812096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.812133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.812146 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.812161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.812174 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.898706 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:24 crc kubenswrapper[4858]: E1205 13:57:24.898868 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.915054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.915107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.915120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.915138 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:24 crc kubenswrapper[4858]: I1205 13:57:24.915149 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:24Z","lastTransitionTime":"2025-12-05T13:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.017990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.018030 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.018041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.018058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.018071 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.120109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.120179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.120191 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.120205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.120216 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.222781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.222816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.222844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.222859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.222869 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.324874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.324912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.324924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.324939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.324953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.427536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.427602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.427610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.427622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.427635 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.530182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.530248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.530262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.530277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.530287 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.632475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.632532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.632544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.632557 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.632566 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.734360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.734393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.734400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.734412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.734421 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.836819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.836882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.836890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.836903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.836914 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.899270 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:25 crc kubenswrapper[4858]: E1205 13:57:25.899396 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.899283 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:25 crc kubenswrapper[4858]: E1205 13:57:25.899475 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.899270 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:25 crc kubenswrapper[4858]: E1205 13:57:25.899558 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.939553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.939591 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.939603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.939619 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:25 crc kubenswrapper[4858]: I1205 13:57:25.939630 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:25Z","lastTransitionTime":"2025-12-05T13:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.041775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.041808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.041816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.041851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.041862 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.144393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.144437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.144449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.144465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.144476 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.246496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.246548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.246560 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.246581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.246617 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.350243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.350305 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.350324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.350355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.350374 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.454061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.454137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.454155 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.454182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.454202 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.556927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.556971 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.556980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.556993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.557031 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.660055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.660113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.660131 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.660156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.660171 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.763863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.763940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.763964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.763993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.764014 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.866703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.866757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.866771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.866789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.866800 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.898709 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:26 crc kubenswrapper[4858]: E1205 13:57:26.898871 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.970207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.970264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.970274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.970295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:26 crc kubenswrapper[4858]: I1205 13:57:26.970309 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:26Z","lastTransitionTime":"2025-12-05T13:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.072881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.072951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.072966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.072986 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.073000 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.176612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.176688 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.176703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.176733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.176749 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.279468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.279522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.279532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.279544 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.279570 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.383342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.383437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.383464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.383504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.383530 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.486785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.486881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.486894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.486918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.486938 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.589896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.589950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.589984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.590004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.590017 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.693162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.693223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.693233 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.693248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.693259 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.796074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.796125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.796157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.796180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.796197 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.898299 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:27 crc kubenswrapper[4858]: E1205 13:57:27.898476 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.898895 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:27 crc kubenswrapper[4858]: E1205 13:57:27.898975 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.899295 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:27 crc kubenswrapper[4858]: E1205 13:57:27.899427 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.899652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.899676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.899683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.899698 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:27 crc kubenswrapper[4858]: I1205 13:57:27.899709 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:27Z","lastTransitionTime":"2025-12-05T13:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.002242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.002280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.002289 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.002303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.002313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.104493 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.104526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.104536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.104552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.104564 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.142950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.142984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.142995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.143009 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.143020 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: E1205 13:57:28.153744 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:28Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.157277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.157304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.157338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.157352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.157361 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: E1205 13:57:28.168795 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:28Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.172269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.172298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.172306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.172318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.172348 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: E1205 13:57:28.185912 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:28Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.189539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.189580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.189592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.189610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.189621 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: E1205 13:57:28.201765 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:28Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.205311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.205373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.205385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.205403 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.205416 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: E1205 13:57:28.219186 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:28Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:28 crc kubenswrapper[4858]: E1205 13:57:28.219327 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.221184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.221213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.221241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.221257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.221267 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.323456 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.323483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.323491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.323505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.323515 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.425711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.425746 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.425757 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.425771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.425781 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.528488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.528528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.528539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.528555 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.528567 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.630506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.630542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.630554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.630569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.630580 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.732547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.732590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.732603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.732618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.732628 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.835835 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.835885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.835896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.835923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.835933 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.898743 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:28 crc kubenswrapper[4858]: E1205 13:57:28.898916 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.938220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.938269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.938281 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.938298 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:28 crc kubenswrapper[4858]: I1205 13:57:28.938310 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:28Z","lastTransitionTime":"2025-12-05T13:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.042958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.043052 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.043071 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.043104 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.043131 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.146266 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.146330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.146342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.146357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.146368 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.249395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.249446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.249475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.249489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.249502 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.352775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.352853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.352870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.352889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.352902 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.456291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.456331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.456342 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.456357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.456368 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.559148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.559186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.559195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.559209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.559221 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.661349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.661387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.661397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.661414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.661431 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.763547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.763579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.763587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.763600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.763610 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.866185 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.866228 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.866239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.866254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.866264 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.898810 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.898814 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.898902 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:29 crc kubenswrapper[4858]: E1205 13:57:29.899004 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:29 crc kubenswrapper[4858]: E1205 13:57:29.899151 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:29 crc kubenswrapper[4858]: E1205 13:57:29.899226 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.968856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.968890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.968897 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.968911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:29 crc kubenswrapper[4858]: I1205 13:57:29.968921 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:29Z","lastTransitionTime":"2025-12-05T13:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.072031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.072086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.072094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.072108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.072117 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.190086 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.190122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.190132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.190147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.190158 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.292125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.292172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.292180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.292194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.292205 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.394485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.394527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.394537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.394554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.394565 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.497066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.497094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.497102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.497114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.497124 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.599510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.599548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.599558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.599571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.599586 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.702540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.702584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.702592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.702608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.702619 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.805291 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.805322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.805332 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.805346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.805356 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.899073 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:30 crc kubenswrapper[4858]: E1205 13:57:30.899262 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.907354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.907389 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.907400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.907415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:30 crc kubenswrapper[4858]: I1205 13:57:30.907426 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:30Z","lastTransitionTime":"2025-12-05T13:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.010140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.010182 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.010195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.010212 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.010225 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.112975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.112998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.113008 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.113020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.113028 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.215934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.215972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.215981 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.216006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.216019 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.318435 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.318676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.318749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.318834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.318903 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.421778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.421878 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.421891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.421906 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.421918 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.524538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.524599 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.524611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.524629 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.524642 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.631834 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.632069 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.632078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.632090 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.632099 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.735042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.735075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.735085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.735099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.735138 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.837478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.837526 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.837536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.837548 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.837557 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.898246 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.898265 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.898387 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:31 crc kubenswrapper[4858]: E1205 13:57:31.898517 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:31 crc kubenswrapper[4858]: E1205 13:57:31.898611 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:31 crc kubenswrapper[4858]: E1205 13:57:31.898927 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.910534 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:31Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.920448 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:31Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.931487 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:31Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.940233 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.940267 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.940277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.940293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.940305 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:31Z","lastTransitionTime":"2025-12-05T13:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.942225 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:31Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 
13:57:31.955058 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:31Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.970934 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:31Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.980951 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:31Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:31 crc kubenswrapper[4858]: I1205 13:57:31.996403 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:31Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.008690 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.019048 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.032035 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to 
be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.043612 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.044164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.044175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.044190 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.044202 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.045406 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\
\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.054753 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.065774 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.084017 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025
-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.093752 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.102316 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.114098 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:32Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.147310 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.147351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.147360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.147375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.147385 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.250325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.250366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.250377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.250391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.250401 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.353275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.353341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.353357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.353398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.353413 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.456072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.456111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.456120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.456134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.456144 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.558756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.558806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.558817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.558853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.558867 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.661311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.661412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.661426 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.661443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.661453 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.763838 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.764596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.764611 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.764627 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.764637 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.867525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.867561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.867598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.867615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.867626 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.898185 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:32 crc kubenswrapper[4858]: E1205 13:57:32.898304 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.969624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.969661 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.969670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.969683 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:32 crc kubenswrapper[4858]: I1205 13:57:32.969693 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:32Z","lastTransitionTime":"2025-12-05T13:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.071584 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.071615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.071623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.071636 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.071645 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.173411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.173487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.173524 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.173550 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.173563 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.275186 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.275246 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.275257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.275276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.275288 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.376922 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.376963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.376975 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.376990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.377001 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.478990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.479018 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.479026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.479040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.479049 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.582714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.582766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.582782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.582803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.582816 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.684944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.685017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.685034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.685067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.685087 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.788208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.788253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.788263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.788279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.788290 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.897275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.897565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.897673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.897756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.897841 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:33Z","lastTransitionTime":"2025-12-05T13:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.898239 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.898267 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:33 crc kubenswrapper[4858]: E1205 13:57:33.898365 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:33 crc kubenswrapper[4858]: I1205 13:57:33.898382 4858 util.go:30] "No sandbox for pod can be found. 
Dec 05 13:57:33 crc kubenswrapper[4858]: E1205 13:57:33.898474 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:57:33 crc kubenswrapper[4858]: E1205 13:57:33.898570 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:57:34 crc kubenswrapper[4858]: I1205 13:57:34.898230 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:57:34 crc kubenswrapper[4858]: E1205 13:57:34.898409 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:57:35 crc kubenswrapper[4858]: I1205 13:57:35.905608 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:57:35 crc kubenswrapper[4858]: E1205 13:57:35.905769 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:57:35 crc kubenswrapper[4858]: I1205 13:57:35.905879 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:57:35 crc kubenswrapper[4858]: E1205 13:57:35.905938 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:57:35 crc kubenswrapper[4858]: I1205 13:57:35.905636 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:57:35 crc kubenswrapper[4858]: E1205 13:57:35.906552 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:57:36 crc kubenswrapper[4858]: I1205 13:57:36.899119 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:57:36 crc kubenswrapper[4858]: E1205 13:57:36.899247 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:57:36 crc kubenswrapper[4858]: I1205 13:57:36.899882 4858 scope.go:117] "RemoveContainer" containerID="d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90"
Dec 05 13:57:36 crc kubenswrapper[4858]: E1205 13:57:36.900090 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"
Dec 05 13:57:37 crc kubenswrapper[4858]: I1205 13:57:37.898795 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:57:37 crc kubenswrapper[4858]: E1205 13:57:37.898948 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:57:37 crc kubenswrapper[4858]: I1205 13:57:37.899133 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:57:37 crc kubenswrapper[4858]: E1205 13:57:37.899178 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:57:37 crc kubenswrapper[4858]: I1205 13:57:37.899279 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:57:37 crc kubenswrapper[4858]: E1205 13:57:37.899328 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Has your network provider started?"} Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.559088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.559335 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.559453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.559539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.559622 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:38 crc kubenswrapper[4858]: E1205 13:57:38.572644 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:38Z is after 2025-08-24T17:21:41Z"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.575789 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.575950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.576041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.576128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.576220 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:38 crc kubenswrapper[4858]: E1205 13:57:38.589433 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:38Z is after 2025-08-24T17:21:41Z"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.592934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.592955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.592962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.592974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.592982 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:38 crc kubenswrapper[4858]: E1205 13:57:38.603119 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:38Z is after 2025-08-24T17:21:41Z"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.605709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.605989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.606000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.606013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.606024 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:38 crc kubenswrapper[4858]: E1205 13:57:38.616039 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:38Z is after 2025-08-24T17:21:41Z"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.618944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.618987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.618997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.619012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.619024 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:38 crc kubenswrapper[4858]: E1205 13:57:38.628672 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:38Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:38 crc kubenswrapper[4858]: E1205 13:57:38.628804 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.629999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.630024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.630032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.630045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.630056 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.732261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.732299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.732309 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.732323 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.732334 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.834747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.834773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.834781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.834794 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.834803 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.898781 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:38 crc kubenswrapper[4858]: E1205 13:57:38.898941 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.936571 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.936609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.936618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.936633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:38 crc kubenswrapper[4858]: I1205 13:57:38.936643 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:38Z","lastTransitionTime":"2025-12-05T13:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.038847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.038908 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.038921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.038937 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.038948 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.141687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.141719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.141727 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.141740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.141749 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.245203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.245240 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.245250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.245264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.245274 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.347714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.347753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.347765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.347781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.347791 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.428436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:39 crc kubenswrapper[4858]: E1205 13:57:39.428548 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:39 crc kubenswrapper[4858]: E1205 13:57:39.428598 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs podName:6197c8ee-275b-44dd-b402-e4b8039c4997 nodeName:}" failed. No retries permitted until 2025-12-05 13:58:11.428583641 +0000 UTC m=+99.976181780 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs") pod "network-metrics-daemon-5jh87" (UID: "6197c8ee-275b-44dd-b402-e4b8039c4997") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.450428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.450467 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.450477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.450494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.450506 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.553464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.553502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.553511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.553525 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.553536 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.655796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.655856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.655868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.655884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.655894 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.758259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.758313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.758325 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.758347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.758358 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.860676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.860711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.860742 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.860781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.860793 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.898483 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.898580 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:39 crc kubenswrapper[4858]: E1205 13:57:39.898621 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.898666 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:39 crc kubenswrapper[4858]: E1205 13:57:39.898742 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:39 crc kubenswrapper[4858]: E1205 13:57:39.898786 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.963244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.963278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.963288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.963307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:39 crc kubenswrapper[4858]: I1205 13:57:39.963318 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:39Z","lastTransitionTime":"2025-12-05T13:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.066621 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.066682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.066695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.066714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.066725 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.169600 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.169631 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.169644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.169658 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.169668 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.272539 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.272570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.272579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.272590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.272601 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.375195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.375225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.375235 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.375250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.375262 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.477181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.477213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.477223 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.477238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.477250 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.579363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.579395 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.579406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.579419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.579431 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.681756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.682595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.683032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.683578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.683925 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.786894 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.787201 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.787301 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.787376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.787451 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.889673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.889715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.889726 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.889743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.889756 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.899191 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:40 crc kubenswrapper[4858]: E1205 13:57:40.899328 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.991802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.992123 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.992197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.992285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:40 crc kubenswrapper[4858]: I1205 13:57:40.992358 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:40Z","lastTransitionTime":"2025-12-05T13:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.095026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.095098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.095111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.095128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.095141 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.197930 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.197965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.197974 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.197987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.197997 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.299532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.299569 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.299578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.299592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.299601 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.402034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.402072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.402082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.402097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.402109 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.504103 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.504161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.504171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.504184 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.504193 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.606422 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.606468 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.606477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.606492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.606501 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.716378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.716442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.716455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.716475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.716492 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.818972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.818999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.819007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.819022 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.819031 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.899058 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.899109 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.899075 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:41 crc kubenswrapper[4858]: E1205 13:57:41.899203 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:41 crc kubenswrapper[4858]: E1205 13:57:41.899274 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:41 crc kubenswrapper[4858]: E1205 13:57:41.899388 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.910857 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:41Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.921203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.921413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.921495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.921572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.921638 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:41Z","lastTransitionTime":"2025-12-05T13:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.923973 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841
c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:41Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.944443 4858 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:41Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.956219 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:41Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.968563 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:41Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.980915 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:41Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:41 crc kubenswrapper[4858]: I1205 13:57:41.994204 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:41Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.008106 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.019108 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.023711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.023745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.023755 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.023772 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.023784 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.028727 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.040450 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.054422 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.069541 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
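The failure repeated throughout this section is purely a clock-versus-certificate comparison: the webhook's serving certificate expired at 2025-08-24T17:21:41Z, so every Post to https://127.0.0.1:9743/pod is rejected during TLS verification, and that in turn makes every pod status patch fail. A minimal read-only diagnostic sketch in Go that reproduces the same NotBefore/NotAfter check against the endpoint named in the log (the address is taken from the log; the program itself is illustrative, not kubelet or webhook source):

```go
// probe_webhook_cert.go — diagnostic sketch, not kubelet code.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip chain verification so the presented leaf can be inspected
	// even though it is expired; this probe is read-only.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificates presented")
	}
	leaf := certs[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))

	// The same comparison the TLS client reports in the log:
	// "current time ... is after ..." means now > NotAfter.
	switch {
	case now.Before(leaf.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(leaf.NotAfter):
		fmt.Printf("certificate expired: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

On CRC, expired internal certificates are normally rotated automatically shortly after the cluster starts; the repeated failures below are what that rotation window looks like from the kubelet's side.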
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.080357 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.097734 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.109864 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.126440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.126488 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.126497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.126512 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.126522 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.129520 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352f
e624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.142138 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.227816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.227890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.227901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.227917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.227929 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.331111 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.331167 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.331183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.331205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.331223 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.333878 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/0.log" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.333935 4858 generic.go:334] "Generic (PLEG): container finished" podID="19dac4e8-493c-456c-b8ea-cc1e48b9867c" containerID="c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf" exitCode=1 Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.333975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjdj6" event={"ID":"19dac4e8-493c-456c-b8ea-cc1e48b9867c","Type":"ContainerDied","Data":"c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.334521 4858 scope.go:117] "RemoveContainer" containerID="c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.349951 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
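Interleaved with the webhook failures, the kubelet keeps publishing NodeNotReady because the container runtime reports NetworkReady=false: no CNI configuration exists yet under /etc/kubernetes/cni/net.d/, and it will only appear once the default network (here OVN-Kubernetes, fronted by multus) writes its config. The real test lives in the runtime's CNI plumbing (ocicni for CRI-O); the sketch below only mirrors its observable behavior, and the accepted extensions are an assumption noted in the comment:

```go
// cni_ready.go — simplified illustration of the NetworkReady check
// behind "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path reported in the log

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: %v\n", err)
		return
	}

	var confs []string
	for _, e := range entries {
		// Extensions commonly accepted for CNI config; the runtime's
		// exact filter may differ.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}

	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("NetworkReady=true, using:", confs[0])
}
```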
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.369954 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af70
6103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.383754 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
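Several of the patch payloads, including the one that follows, open with a $setElementOrder/conditions directive. This is strategic-merge-patch syntax: the kubelet sends only the conditions entries that changed, merged into the existing list by their "type" key, plus the desired ordering of the merged list. A hand-built illustration of that shape (the kubelet derives it via the strategic-merge-patch machinery in k8s.io/apimachinery; the condition values here are hypothetical):

```go
// smp_conditions.go — illustrative construction of the patch shape
// visible in the payloads above; not kubelet source.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Desired ordering of the merged conditions list, keyed by "type".
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Only changed entries are sent; the server merges them by key.
			"conditions": []map[string]any{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
			},
		},
	}
	out, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```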
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.395951 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.410845 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.427710 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:41Z\\\",\\\"message\\\":\\\"2025-12-05T13:56:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79\\\\n2025-12-05T13:56:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79 to /host/opt/cni/bin/\\\\n2025-12-05T13:56:55Z [verbose] multus-daemon started\\\\n2025-12-05T13:56:55Z [verbose] Readiness Indicator file check\\\\n2025-12-05T13:57:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.434207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.434250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.434262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.434278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.434290 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.448492 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.463240 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.477481 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.493685 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.511063 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.522001 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.533106 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.537361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.537390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.537402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.537418 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.537430 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.545324 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.558074 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.573366 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.586926 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.599208 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:42Z is after 2025-08-24T17:21:41Z" Dec 05 
13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.640852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.640880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.640891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.640905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.640915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.744259 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.744653 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.744775 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.744890 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.744980 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.856662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.856721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.856738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.856760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.856775 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.898883 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:42 crc kubenswrapper[4858]: E1205 13:57:42.899062 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.959876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.960042 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.960154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.960274 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:42 crc kubenswrapper[4858]: I1205 13:57:42.960430 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:42Z","lastTransitionTime":"2025-12-05T13:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.063370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.063424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.063434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.063449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.063459 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.165856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.165899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.165912 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.165929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.165940 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.268316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.268346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.268354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.268370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.268380 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.338624 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/0.log" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.338665 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjdj6" event={"ID":"19dac4e8-493c-456c-b8ea-cc1e48b9867c","Type":"ContainerStarted","Data":"1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.353181 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.374679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.374792 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.374840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.374870 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.374885 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.376623 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.394255 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 
13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.411975 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.427928 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.440215 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.450194 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.459708 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.474666 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:41Z\\\",\\\"message\\\":\\\"2025-12-05T13:56:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79\\\\n2025-12-05T13:56:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79 to /host/opt/cni/bin/\\\\n2025-12-05T13:56:55Z [verbose] multus-daemon started\\\\n2025-12-05T13:56:55Z [verbose] Readiness Indicator file check\\\\n2025-12-05T13:57:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.477617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.477660 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.477670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.477685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.477694 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.496570 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.509480 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.524605 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.538948 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.556380 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.567059 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.579621 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.580218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.580253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.580264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.580279 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.580290 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.592479 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.608495 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:43Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.682392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.682432 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.682442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.682455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.682465 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.784615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.784668 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.784682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.784701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.784713 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.887626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.887709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.887722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.887738 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.887752 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.898925 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.899024 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.898966 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:43 crc kubenswrapper[4858]: E1205 13:57:43.899122 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:43 crc kubenswrapper[4858]: E1205 13:57:43.899198 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:43 crc kubenswrapper[4858]: E1205 13:57:43.899304 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.989575 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.989622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.989632 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.989648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:43 crc kubenswrapper[4858]: I1205 13:57:43.989659 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:43Z","lastTransitionTime":"2025-12-05T13:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.092026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.092067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.092079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.092093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.092104 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.194977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.195036 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.195047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.195066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.195079 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.296685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.296721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.296730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.296744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.296752 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.398880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.398914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.398921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.398934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.398942 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.501203 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.501255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.501263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.501276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.501286 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.603673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.603703 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.603711 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.603724 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.603734 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.705885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.705940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.705954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.705968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.705979 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.808058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.808089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.808099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.808112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.808121 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.898934 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:44 crc kubenswrapper[4858]: E1205 13:57:44.899049 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.910316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.910364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.910376 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.910397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:44 crc kubenswrapper[4858]: I1205 13:57:44.910411 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:44Z","lastTransitionTime":"2025-12-05T13:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.012782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.012902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.012916 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.012932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.012944 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.115243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.115293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.115352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.115381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.115397 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.217749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.217785 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.217793 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.217806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.217856 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.320287 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.320317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.320327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.320339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.320348 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.424491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.424547 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.424566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.424589 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.424605 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.527516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.527553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.527563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.527578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.527588 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.629654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.629695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.629705 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.629741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.629752 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.732059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.732108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.732117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.732134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.732145 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.835073 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.835119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.835133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.835151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.835162 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.898583 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.898659 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:45 crc kubenswrapper[4858]: E1205 13:57:45.898730 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.898757 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:45 crc kubenswrapper[4858]: E1205 13:57:45.898884 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:45 crc kubenswrapper[4858]: E1205 13:57:45.898947 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.936877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.936932 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.936940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.936955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:45 crc kubenswrapper[4858]: I1205 13:57:45.937336 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:45Z","lastTransitionTime":"2025-12-05T13:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.039505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.039542 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.039551 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.039565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.039575 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.142028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.142077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.142090 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.142110 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.142123 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.244804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.244873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.244885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.244902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.244915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.347339 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.347373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.347383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.347397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.347408 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.450041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.450134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.450145 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.450159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.450167 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.552061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.552351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.552416 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.552486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.552543 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.655122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.655161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.655175 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.655193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.655205 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.758261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.758302 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.758314 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.758331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.758349 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.860813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.860871 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.860885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.860901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.860915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.898326 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:46 crc kubenswrapper[4858]: E1205 13:57:46.898663 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.963808 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.963883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.963895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.963910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:46 crc kubenswrapper[4858]: I1205 13:57:46.963938 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:46Z","lastTransitionTime":"2025-12-05T13:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.066377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.066431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.066441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.066453 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.066478 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.169899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.169943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.169952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.169966 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.169975 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.272247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.272283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.272292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.272312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.272327 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.374603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.374637 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.374648 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.374663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.374675 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.477904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.477953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.477964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.477980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.477990 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.580943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.580984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.580998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.581014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.581026 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.683768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.683879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.683896 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.683917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.683932 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.786020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.786064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.786074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.786087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.786097 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.889317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.889354 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.889364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.889378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.889388 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:47Z","lastTransitionTime":"2025-12-05T13:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.899103 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.899241 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:47 crc kubenswrapper[4858]: I1205 13:57:47.899111 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:47 crc kubenswrapper[4858]: E1205 13:57:47.899426 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:47 crc kubenswrapper[4858]: E1205 13:57:47.899509 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:47 crc kubenswrapper[4858]: E1205 13:57:47.899584 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.006033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.006066 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.006074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.006087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.006095 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.109168 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.109219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.109227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.109242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.109253 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.211985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.212020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.212028 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.212041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.212053 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.314238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.314312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.314322 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.314334 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.314342 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.417522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.417558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.417566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.417582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.417592 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.520469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.520506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.520517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.520532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.520545 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.623013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.623051 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.623060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.623075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.623085 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.725699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.725740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.725749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.725767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.725780 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.827847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.827884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.827892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.827905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.827915 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.898331 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:57:48 crc kubenswrapper[4858]: E1205 13:57:48.898818 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.899113 4858 scope.go:117] "RemoveContainer" containerID="d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.929392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.929436 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.929448 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.929463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.929476 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.956958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.956990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.957001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.957041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.957051 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:48 crc kubenswrapper[4858]: E1205 13:57:48.970303 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:48Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.973734 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.973758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.973765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.973777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.973789 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:48 crc kubenswrapper[4858]: E1205 13:57:48.985519 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:48Z is after 
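Every status-patch failure above has the same proximate cause: the serving certificate presented by the node.network-node-identity webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-12-05. One way to confirm this from the node is to read the certificate's validity window directly; the sketch below is illustrative only, assuming Python 3 with the third-party cryptography package (version 42 or newer) is available and that the endpoint still completes a TLS handshake when verification is disabled:

    # Sketch: print the validity window of the certificate served on
    # 127.0.0.1:9743, the webhook endpoint named in the errors above.
    import socket
    import ssl

    from cryptography import x509

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False    # fetch the certificate even though it is invalid
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)    # raw DER bytes

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)     # expect 2025-08-24 17:21:41 UTC

If notAfter matches the date in the x509 errors, the kubelet is rejecting the handshake correctly, and the problem lies with the webhook's expired serving certificate rather than anywhere in the node-status path.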
2025-08-24T17:21:41Z" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.989171 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.989320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.989415 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.989504 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:48 crc kubenswrapper[4858]: I1205 13:57:48.989582 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:48Z","lastTransitionTime":"2025-12-05T13:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: E1205 13:57:49.003195 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:49Z is after 
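Underneath the volume, this excerpt contains only two distinct failure signatures: the NotReady heartbeat from setters.go:603 (no CNI configuration in /etc/kubernetes/cni/net.d/, so the network plugin never reported ready) and the status-patch failure from kubelet_node_status.go:585 (the expired webhook certificate). When triaging a capture like this, grouping entries by their klog source site makes that structure visible at a glance. A minimal sketch, assuming the journal excerpt has been saved to a file (the name kubelet.log here is hypothetical):

    # Sketch: count kubenswrapper entries by klog source site (file.go:line)
    # so the few distinct messages behind thousands of lines stand out.
    import re
    from collections import Counter

    # klog entries look like: I1205 13:57:48.212053 4858 setters.go:603] "..."
    SITE = re.compile(r"([IWE])\d{4} [\d:.]+\s+\d+\s+([\w.]+\.go:\d+)\]")

    counts = Counter()
    with open("kubelet.log") as fh:
        for line in fh:
            for severity, site in SITE.findall(line):
                counts[(severity, site)] += 1

    for (severity, site), n in counts.most_common():
        print(f"{n:6d}  {severity}  {site}")

Grouping on the source site rather than on the full message keeps the per-entry timestamps from splitting what is really one repeated event into thousands of unique lines.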
2025-08-24T17:21:41Z" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.006787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.006945 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.007026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.007097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.007154 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: E1205 13:57:49.017616 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:49Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.021013 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.021048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.021057 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.021072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.021111 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: E1205 13:57:49.033134 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:49Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:49 crc kubenswrapper[4858]: E1205 13:57:49.033301 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.034951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.035000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.035011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.035024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.035034 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.137586 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.137806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.137950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.138074 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.138195 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.240858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.241148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.241248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.241391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.241491 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.344163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.344413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.344517 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.344594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.344658 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.446635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.447272 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.447363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.447442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.447511 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.550406 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.550447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.550458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.550474 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.550485 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.653149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.653187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.653197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.653213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.653226 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.755443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.755472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.755487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.755508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.755519 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.857689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.857993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.858062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.858136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.858220 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.899309 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.899335 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:49 crc kubenswrapper[4858]: E1205 13:57:49.899694 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.899378 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:49 crc kubenswrapper[4858]: E1205 13:57:49.899975 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:49 crc kubenswrapper[4858]: E1205 13:57:49.899789 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.961405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.961465 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.961477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.961496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:49 crc kubenswrapper[4858]: I1205 13:57:49.961506 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:49Z","lastTransitionTime":"2025-12-05T13:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.066455 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.066495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.066505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.066522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.066533 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.169336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.169384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.169396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.169408 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.169432 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.271936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.271988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.271997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.272014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.272026 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.359987 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/2.log" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.363095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af"} Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.363497 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.374058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.374093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.374101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.374115 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.374125 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.375261 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.388417 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:41Z\\\",\\\"message\\\":\\\"2025-12-05T13:56:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79\\\\n2025-12-05T13:56:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79 to /host/opt/cni/bin/\\\\n2025-12-05T13:56:55Z [verbose] multus-daemon started\\\\n2025-12-05T13:56:55Z [verbose] Readiness Indicator file check\\\\n2025-12-05T13:57:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.414710 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991
e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.425785 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.437136 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.449977 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.467123 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.476396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.476424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.476431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.476444 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.476453 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.477208 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.489313 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.500363 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.514212 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.525463 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.535302 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.549346 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.560094 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.572794 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely 
on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc 
kubenswrapper[4858]: I1205 13:57:50.578905 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.578941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.578950 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.578965 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.578974 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.584480 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.592864 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:50Z is after 
2025-08-24T17:21:41Z"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.681659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.681721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.681733 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.681751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.681765 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.784085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.784142 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.784154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.784200 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.784215 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.885954 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.885992 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.886000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.886014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.886023 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
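[Editor's note: the kubelet re-emits the same Ready=False condition roughly every 100 ms while the network plugin is down. The condition printed by setters.go:603 is plain JSON, so it can be inspected mechanically when triaging a log like this; below is a minimal Go sketch. The struct is a hand-written subset of the NodeCondition fields visible in the lines above, not an import of the real Kubernetes type.]

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // Hand-rolled subset of the NodeCondition fields that appear in the
    // setters.go:603 entries above (not imported from k8s.io/api).
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Condition JSON copied verbatim from the log entries above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status, c.LastTransitionTime, c.Reason)
    }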
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.899159 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:57:50 crc kubenswrapper[4858]: E1205 13:57:50.899309 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.989029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.989093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.989106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.989122 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:50 crc kubenswrapper[4858]: I1205 13:57:50.989134 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:50Z","lastTransitionTime":"2025-12-05T13:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.091437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.091481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.091492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.091507 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.091517 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
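[Editor's note: every "Failed to update status for pod" entry in this stretch fails for the same root cause. The network-node-identity webhook on https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-05, so every TLS handshake is rejected. The sketch below reproduces the NotBefore/NotAfter window check that fails in these log lines; the mount path /etc/webhook-cert/ is taken from the webhook container's volumeMounts logged earlier, and the file name tls.crt is an assumption.]

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Assumed file name under the webhook-cert mount seen in the log.
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        // Mirrors the x509 validity check behind the error
        // "current time ... is after 2025-08-24T17:21:41Z".
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("certificate invalid: valid %s to %s, now %s\n",
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339),
                now.Format(time.RFC3339))
            os.Exit(1)
        }
        fmt.Println("certificate currently valid")
    }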
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.193955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.194010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.194031 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.194050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.194062 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.296534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.296567 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.296580 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.296597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.296609 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
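[Editor's note: the NotReady condition repeats because /etc/kubernetes/cni/net.d/ is still empty: ovnkube-controller, which would write the CNI config, keeps crashing on the same expired certificate (see the CrashLoopBackOff entries that follow). Below is a rough illustration of the readiness check involved; the directory comes from the kubelet error text, and treating *.conf, *.conflist, and *.json as config files is a simplification, not the exact CRI-O/kubelet loader.]

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the kubelet "network plugin not ready" error.
        dir := "/etc/kubernetes/cni/net.d"
        var confs []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            confs = append(confs, m...)
        }
        if len(confs) == 0 {
            fmt.Println("no CNI configuration file found; network plugin not ready")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", confs)
    }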
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.368564 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/3.log"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.370560 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/2.log"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.374193 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af" exitCode=1
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.374269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af"}
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.374361 4858 scope.go:117] "RemoveContainer" containerID="d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.375503 4858 scope.go:117] "RemoveContainer" containerID="5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af"
Dec 05 13:57:51 crc kubenswrapper[4858]: E1205 13:57:51.375872 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"
Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.391296 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.399204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.399242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.399255 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.399271 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.399284 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.406648 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.421037 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.438164 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.456534 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:50Z\\\",\\\"message\\\":\\\"w:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1205 13:57:50.614563 6740 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.469184 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.481623 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.497267 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.502290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.502331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.502343 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.502357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.502367 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.511970 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.530042 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.541796 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.552298 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.566288 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.581068 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af70
6103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.593676 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.603783 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.604652 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.604682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.604690 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.604706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.604717 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.616470 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:41Z\\\",\\\"message\\\":\\\"2025-12-05T13:56:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79\\\\n2025-12-05T13:56:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79 to /host/opt/cni/bin/\\\\n2025-12-05T13:56:55Z [verbose] multus-daemon started\\\\n2025-12-05T13:56:55Z [verbose] Readiness Indicator file check\\\\n2025-12-05T13:57:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.635234 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991
e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.707101 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.707132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.707141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.707157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.707167 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.808947 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.808983 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.808993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.809010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.809020 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.899013 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:51 crc kubenswrapper[4858]: E1205 13:57:51.899157 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.899276 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:51 crc kubenswrapper[4858]: E1205 13:57:51.899402 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.899416 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:51 crc kubenswrapper[4858]: E1205 13:57:51.899505 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
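
Every "Failed to update status for pod" record in this stretch of the log dies the same way: the kubelet's Post to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-12-05T13:57:51Z. A minimal Go sketch of that check follows; it is not part of this log, it assumes the webhook is still listening on 127.0.0.1:9743 as the Post URLs show, and its output format is illustrative only.

```go
// Minimal sketch: reproduce the certificate-validity check that every
// "failed calling webhook" record above is failing. Assumes the
// network-node-identity webhook is still serving on 127.0.0.1:9743.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Dial without chain verification so we can inspect the expired
	// certificate instead of aborting the handshake the way the
	// kubelet's webhook client does.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificate presented")
	}
	leaf := certs[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))
	// On this node the handshake fails because now (2025-12-05T13:57:51Z)
	// is after notAfter (2025-08-24T17:21:41Z).
	fmt.Printf("expired:   %v\n", now.After(leaf.NotAfter))
}
```

Run on the node, this separates the two possible causes: if notAfter really is 2025-08-24T17:21:41Z, the webhook's serving certificate needs rotation; if notAfter is sane, the node clock is skewed. Either way, until the x509 error clears, the webhook keeps rejecting both the kubelet's status patches and the node annotations ovnkube-controller needs at startup, which is consistent with the crash-looping ovnkube-controller and the repeating NodeNotReady / "no CNI configuration file in /etc/kubernetes/cni/net.d/" records throughout this section.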
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.911910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.911964 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.911973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.911985 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.911996 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:51Z","lastTransitionTime":"2025-12-05T13:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.913922 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f
35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.925320 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.936529 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.951577 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.972514 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6e4c0a1d6c4ad9bc03f930fc4fca7019adcf6df3e136adc36601d4d65d79a90\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:23Z\\\",\\\"message\\\":\\\"olver-d85q7 in node crc\\\\nI1205 13:57:22.795969 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-d85q7 after 0 failed attempt(s)\\\\nI1205 13:57:22.795974 6390 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-d85q7\\\\nI1205 13:57:22.795161 6390 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1205 13:57:22.795989 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1205 13:57:22.795996 6390 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1205 13:57:22.795727 6390 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796009 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1205 13:57:22.796010 6390 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-5jh87\\\\nI1205 13:57:22.796025 6390 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI1205 13:57:22.795717 6390 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1205 13:57:22.796033 6390 obj_retry.go:386] Retry successful for *v1.Pod openshift-k\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:50Z\\\",\\\"message\\\":\\\"w:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1205 13:57:50.614563 6740 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.982290 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:51 crc kubenswrapper[4858]: I1205 13:57:51.995287 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:51Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.010929 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.013759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.013841 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.013853 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.013868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.013880 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.022392 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.034412 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.045539 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.054928 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.067565 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.081655 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af70
6103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.092744 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.101501 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.113439 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:41Z\\\",\\\"message\\\":\\\"2025-12-05T13:56:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79\\\\n2025-12-05T13:56:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79 to /host/opt/cni/bin/\\\\n2025-12-05T13:56:55Z [verbose] multus-daemon started\\\\n2025-12-05T13:56:55Z [verbose] Readiness Indicator file check\\\\n2025-12-05T13:57:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.115902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.115934 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.115946 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.115960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.115971 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.131045 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.217472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.217510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.217519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.217531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.217540 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.319410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.319443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.319450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.319463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.319471 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.378736 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/3.log" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.382094 4858 scope.go:117] "RemoveContainer" containerID="5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af" Dec 05 13:57:52 crc kubenswrapper[4858]: E1205 13:57:52.382334 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.393616 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.405521 4858 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af70
6103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.420123 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.421210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.421234 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.421242 4858 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.421254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.421263 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.428677 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.437464 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.448554 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:41Z\\\",\\\"message\\\":\\\"2025-12-05T13:56:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79\\\\n2025-12-05T13:56:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79 to /host/opt/cni/bin/\\\\n2025-12-05T13:56:55Z [verbose] multus-daemon started\\\\n2025-12-05T13:56:55Z [verbose] Readiness Indicator file check\\\\n2025-12-05T13:57:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.467042 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991
e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.477994 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.487367 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.502720 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.520511 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:50Z\\\",\\\"message\\\":\\\"w:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1205 13:57:50.614563 6740 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.523537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.523572 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.523582 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.523594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.523602 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.530294 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.542895 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.552734 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.566107 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.578233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.588882 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.610977 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:52Z is after 2025-08-24T17:21:41Z" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.625713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.625745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.625753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.625766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.625776 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.727925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.727967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.727976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.727989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.727999 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.830173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.830214 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.830225 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.830241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.830252 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.898377 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:52 crc kubenswrapper[4858]: E1205 13:57:52.898508 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.932353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.932609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.932689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.932754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:52 crc kubenswrapper[4858]: I1205 13:57:52.932841 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:52Z","lastTransitionTime":"2025-12-05T13:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.034864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.034915 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.034925 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.034942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.034955 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.137396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.137437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.137446 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.137461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.137472 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.241236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.241485 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.241554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.241615 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.241708 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.344209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.344248 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.344256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.344268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.344277 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.446655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.446691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.446701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.446715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.446725 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.548736 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.548774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.548784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.548797 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.548807 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.650799 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.650865 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.650874 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.650888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.650899 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.753634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.753670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.753679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.753692 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.753700 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.856424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.856483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.856505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.856537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.856559 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.898742 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.898885 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:53 crc kubenswrapper[4858]: E1205 13:57:53.899017 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.899038 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:53 crc kubenswrapper[4858]: E1205 13:57:53.899162 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:53 crc kubenswrapper[4858]: E1205 13:57:53.899263 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.959759 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.959868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.959891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.959918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:53 crc kubenswrapper[4858]: I1205 13:57:53.959938 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:53Z","lastTransitionTime":"2025-12-05T13:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.062410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.062460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.062471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.062489 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.062502 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.165026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.165055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.165065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.165077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.165090 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.268384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.268419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.268430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.268447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.268458 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.371029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.371080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.371097 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.371120 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.371137 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.473784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.473840 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.473849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.473862 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.473872 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.575758 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.575796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.575805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.575819 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.575844 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.678699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.678750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.678760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.678777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.678788 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.782239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.782318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.782336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.782366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.782382 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.884639 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.884677 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.884687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.884699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.884709 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.898187 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:54 crc kubenswrapper[4858]: E1205 13:57:54.898312 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.987307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.987587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.987595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.987608 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:54 crc kubenswrapper[4858]: I1205 13:57:54.987617 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:54Z","lastTransitionTime":"2025-12-05T13:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.089839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.089880 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.089893 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.089910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.089922 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.193107 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.193139 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.193148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.193164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.193180 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.295466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.295500 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.295508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.295520 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.295528 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.397851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.397907 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.397924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.397935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.397945 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.500327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.500364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.500375 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.500391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.500403 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.602998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.603081 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.603096 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.603113 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.603126 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.696977 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.697061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.697083 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697135 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:59.69710543 +0000 UTC m=+148.244703569 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697168 4858 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697170 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697198 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697212 4858 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.697209 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697253 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:58:59.697237914 +0000 UTC m=+148.244836053 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697269 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 13:58:59.697264204 +0000 UTC m=+148.244862343 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.697285 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697356 4858 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697385 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 13:58:59.697378007 +0000 UTC m=+148.244976146 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697421 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697464 4858 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697479 4858 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.697536 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 13:58:59.697524381 +0000 UTC m=+148.245122600 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.705058 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.705085 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.705093 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.705105 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.705115 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.808532 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.808565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.808576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.808596 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.808608 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.899312 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.899421 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.899593 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.899653 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.899794 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:57:55 crc kubenswrapper[4858]: E1205 13:57:55.899887 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.910592 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.910623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.910633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.910646 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:55 crc kubenswrapper[4858]: I1205 13:57:55.910657 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:55Z","lastTransitionTime":"2025-12-05T13:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.012689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.012720 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.012731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.012745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.012756 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.115495 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.115540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.115549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.115564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.115578 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.217424 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.217454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.217462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.217473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.217482 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.320421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.320451 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.320461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.320473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
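The same five-entry block (four "Recording event message" lines plus one setters.go "Node became not ready" line) repeats roughly every 100 ms for the entire window below, so the useful signal is the duration of the NotReady stretch rather than any individual entry. A short sketch for summarizing it, with the regex tuned to these kubenswrapper/klog lines and the input filename purely illustrative:

import re
from collections import Counter

# Tally the node-status events repeated throughout this journal extract.
# The pattern matches the "Recording event message for node" lines from
# kubelet_node_status.go as they appear in this log; the kubelet.log
# filename is a hypothetical extract of this journal.
EVENT_RE = re.compile(
    r'(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}) crc kubenswrapper\[\d+\]: '
    r'I\d+ [\d:.]+ +\d+ kubelet_node_status\.go:\d+\] '
    r'"Recording event message for node" node="crc" event="(?P<event>\w+)"'
)

def tally(lines):
    counts, first, last = Counter(), None, None
    for line in lines:
        for m in EVENT_RE.finditer(line):
            counts[m.group("event")] += 1
            first = first or m.group("ts")
            last = m.group("ts")
    return counts, first, last

if __name__ == "__main__":
    with open("kubelet.log") as fh:
        counts, first, last = tally(fh)
    print(f"window: {first} .. {last}")
    for event, n in counts.most_common():
        print(f"{event}: {n}")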
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.320482 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.423114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.423153 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.423164 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.423180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.423192 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.525879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.525919 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.525927 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.525942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.525952 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.628026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.628047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.628055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.628067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.628076 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.729523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.729587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.729597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.729612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.729620 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.832299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.832336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.832349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.832365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.832377 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.898750 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:56 crc kubenswrapper[4858]: E1205 13:57:56.898899 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.934655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.934721 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.934731 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.934745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:56 crc kubenswrapper[4858]: I1205 13:57:56.934758 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:56Z","lastTransitionTime":"2025-12-05T13:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.037873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.037917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.037929 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.037944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.037957 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.140397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.140437 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.140445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.140459 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.140468 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.242622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.242911 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.242998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.243084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.243202 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.345663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.345702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.345713 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.345728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.345739 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.448939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.448988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.448999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.449015 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.449028 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.551935 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.552012 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.552027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.552045 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.552054 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.654062 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.654114 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.654125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.654162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.654174 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.792189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.792475 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.792538 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.792610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.792668 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.895886 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.895957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.895970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.895989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.896002 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.899145 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.899264 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.899160 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:57 crc kubenswrapper[4858]: E1205 13:57:57.899485 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:57 crc kubenswrapper[4858]: E1205 13:57:57.899401 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:57 crc kubenswrapper[4858]: E1205 13:57:57.899294 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.998534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.998909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.999002 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.999091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:57 crc kubenswrapper[4858]: I1205 13:57:57.999236 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:57Z","lastTransitionTime":"2025-12-05T13:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.102064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.102125 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.102136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.102159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.102172 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.205239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.205497 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.205563 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.205635 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.205706 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.307967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.308010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.308023 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.308040 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.308053 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.409802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.409847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.409858 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.409872 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.409883 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.512357 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.512392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.512400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.512412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.512421 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.615006 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.615060 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.615070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.615087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.615099 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.717329 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.717355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.717373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.717387 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.717397 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.819151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.819188 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.819199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.819213 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.819222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.898868 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:57:58 crc kubenswrapper[4858]: E1205 13:57:58.899107 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.912575 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.921421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.921466 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.921478 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.921492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:58 crc kubenswrapper[4858]: I1205 13:57:58.921501 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:58Z","lastTransitionTime":"2025-12-05T13:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.024091 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.024157 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.024178 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.024196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.024230 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.126473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.126510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.126534 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.126552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.126562 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.229197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.229239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.229247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.229278 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.229289 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.315181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.315443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.315531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.315613 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.315677 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.330201 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:59Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.333034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.333055 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.333063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.333076 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.333085 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.342581 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:59Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.345320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.345346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.345355 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.345368 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.345378 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.354494 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:59Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.357317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.357364 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.357372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.357384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.357392 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.366905 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:59Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.369312 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.369362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.369371 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.369383 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.369394 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.379160 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:59Z is after 
2025-08-24T17:21:41Z" Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.379277 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.380645 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.380672 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.380681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.380695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.380705 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.483151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.483187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.483197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.483211 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.483222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.585242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.585564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.585665 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.585751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.585813 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.688353 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.688391 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.688400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.688412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.688421 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.790654 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.790745 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.790760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.790774 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.790800 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.893268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.893318 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.893330 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.893347 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.893360 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.898522 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.898636 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.898888 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.898952 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.899102 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:57:59 crc kubenswrapper[4858]: E1205 13:57:59.899178 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.996345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.996398 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.996420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.996447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:57:59 crc kubenswrapper[4858]: I1205 13:57:59.996466 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:57:59Z","lastTransitionTime":"2025-12-05T13:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.099644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.099714 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.099729 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.099753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.099767 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.202859 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.202901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.202910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.202941 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.202953 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.305725 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.305761 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.305770 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.305784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.305794 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.408857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.408939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.408963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.408987 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.409003 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.511699 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.511766 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.511780 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.511802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.511842 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.615204 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.615237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.615247 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.615261 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.615272 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.717206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.717257 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.717273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.717293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.717345 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.819460 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.819494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.819505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.819522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.819534 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.899067 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:00 crc kubenswrapper[4858]: E1205 13:58:00.899221 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.922530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.922564 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.922576 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.922593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:00 crc kubenswrapper[4858]: I1205 13:58:00.922605 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:00Z","lastTransitionTime":"2025-12-05T13:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.026087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.026156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.026181 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.026217 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.026244 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.130177 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.130753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.130767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.130804 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.130816 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.236106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.236166 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.236180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.236206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.236222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.339282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.339331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.339346 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.339362 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.339376 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.441958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.442005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.442017 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.442032 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.442041 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.544268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.544296 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.544304 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.544316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.544326 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.646324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.646370 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.646378 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.646392 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.646401 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.749209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.749283 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.749293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.749308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.749318 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.851844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.852352 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.852443 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.852531 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.852611 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.898667 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:01 crc kubenswrapper[4858]: E1205 13:58:01.898771 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.898964 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:01 crc kubenswrapper[4858]: E1205 13:58:01.899061 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.898965 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:01 crc kubenswrapper[4858]: E1205 13:58:01.899189 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.916868 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1
e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.925478 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3124024-a408-41a9-a2d5-c839063bbb73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://364129393fe733afe95e5aca07c0ff9db100dcedab449f4f50db499b90046a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ee1e3e588b099ea3b0edf02ba290d666b2ce1625c5f39e3d14e8658816373a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ee1e3e588b099ea3b0edf02ba290d666b2ce1625c5f39e3d14e8658816373a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.935879 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.943872 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.953706 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:41Z\\\",\\\"message\\\":\\\"2025-12-05T13:56:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79\\\\n2025-12-05T13:56:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79 to /host/opt/cni/bin/\\\\n2025-12-05T13:56:55Z [verbose] multus-daemon started\\\\n2025-12-05T13:56:55Z [verbose] Readiness Indicator file check\\\\n2025-12-05T13:57:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.955024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.955137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.955222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.955320 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.955402 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:01Z","lastTransitionTime":"2025-12-05T13:58:01Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.963857 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.976393 4858 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.987567 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:01 crc kubenswrapper[4858]: I1205 13:58:01.998736 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:01Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.010332 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.025140 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.042918 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:50Z\\\",\\\"message\\\":\\\"w:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1205 13:57:50.614563 6740 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.054664 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.057691 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.057719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.057728 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.057741 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.057750 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.065730 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.076330 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 
13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.087808 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for 
RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.099592 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.109558 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.120639 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:02Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.159856 4858 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.159895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.159903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.159918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.159928 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.261670 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.261706 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.261716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.261730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.261742 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.364384 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.364429 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.364439 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.364452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.364464 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.466634 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.466675 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.466686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.466700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.466711 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.570129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.570183 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.570194 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.570210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.570222 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.673027 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.673088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.673099 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.673117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.673128 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.777884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.777960 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.778054 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.778084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.778101 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.880952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.881014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.881033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.881059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.881074 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.899274 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:02 crc kubenswrapper[4858]: E1205 13:58:02.899444 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.983385 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.983428 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.983440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.983454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:02 crc kubenswrapper[4858]: I1205 13:58:02.983465 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:02Z","lastTransitionTime":"2025-12-05T13:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.085976 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.086034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.086047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.086064 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.086075 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.188844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.188901 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.188917 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.188942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.188992 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.291782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.291868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.291881 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.291898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.291910 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.395205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.395240 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.395249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.395264 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.395274 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.498743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.498803 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.498816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.498857 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.498867 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.602047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.602108 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.602121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.602144 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.602158 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.705813 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.705876 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.705888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.705909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.705922 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.808887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.808928 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.809133 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.809156 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.809177 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.898662 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.898775 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.898908 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:03 crc kubenswrapper[4858]: E1205 13:58:03.898917 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:03 crc kubenswrapper[4858]: E1205 13:58:03.899073 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:03 crc kubenswrapper[4858]: E1205 13:58:03.899186 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.911431 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.911491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.911505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.911522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:03 crc kubenswrapper[4858]: I1205 13:58:03.911531 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:03Z","lastTransitionTime":"2025-12-05T13:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.014170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.014224 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.014238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.014260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.014273 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.117377 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.117430 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.117442 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.117462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.117476 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.220689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.220743 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.220756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.220777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.220791 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.324529 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.324598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.324612 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.324630 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.324645 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.428024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.428094 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.428112 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.428134 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.428151 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.530535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.530581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.530594 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.530610 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.530622 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.632664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.632715 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.632730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.632747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.632757 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.734931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.734968 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.734984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.735000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.735010 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.837350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.837440 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.837450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.837462 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.837474 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.898564 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:04 crc kubenswrapper[4858]: E1205 13:58:04.898692 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.939702 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.939753 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.939765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.939787 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:04 crc kubenswrapper[4858]: I1205 13:58:04.939798 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:04Z","lastTransitionTime":"2025-12-05T13:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.042128 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.042511 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.042536 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.042552 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.042564 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.144535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.144565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.144573 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.144585 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.144593 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.247089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.247116 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.247124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.247136 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.247145 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.349313 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.349341 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.349349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.349361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.349369 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.452033 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.452072 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.452084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.452100 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.452110 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.554722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.554767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.554779 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.554796 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.554808 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.657282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.657333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.657345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.657363 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.657374 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.759837 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.759879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.759888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.759902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.759911 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.862295 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.862400 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.862412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.862425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.862436 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.899032 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.899066 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.899032 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:05 crc kubenswrapper[4858]: E1205 13:58:05.899150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:05 crc kubenswrapper[4858]: E1205 13:58:05.899234 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:05 crc kubenswrapper[4858]: E1205 13:58:05.899312 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.964845 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.964879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.964888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.964902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:05 crc kubenswrapper[4858]: I1205 13:58:05.964912 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:05Z","lastTransitionTime":"2025-12-05T13:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.068220 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.068268 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.068284 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.068303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.068315 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.171518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.171579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.171598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.171623 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.171643 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.274231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.274276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.274285 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.274300 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.274313 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.377307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.377356 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.377365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.377381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.377393 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.480527 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.480581 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.480590 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.480609 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.480621 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.584454 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.584508 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.584519 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.584549 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.584563 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.686628 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.687208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.687231 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.687260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.687280 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.790856 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.790940 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.790961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.790990 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.791009 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.893218 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.893253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.893260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.893275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.893292 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.898652 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:06 crc kubenswrapper[4858]: E1205 13:58:06.898790 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.996198 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.996241 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.996252 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.996270 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:06 crc kubenswrapper[4858]: I1205 13:58:06.996282 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:06Z","lastTransitionTime":"2025-12-05T13:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.098546 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.098593 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.098603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.098617 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.098629 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.201530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.201602 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.201624 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.201657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.201684 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.304676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.304771 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.304795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.304873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.304903 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.407921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.407962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.407972 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.407988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.408002 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.509924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.509967 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.509980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.509998 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.510011 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.612655 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.612680 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.612689 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.612701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.612710 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.714995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.715061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.715075 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.715098 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.715113 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.817386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.817445 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.817458 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.817473 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.817484 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.898958 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.899002 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.899031 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:07 crc kubenswrapper[4858]: E1205 13:58:07.899095 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:07 crc kubenswrapper[4858]: E1205 13:58:07.899191 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:07 crc kubenswrapper[4858]: E1205 13:58:07.899586 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.899866 4858 scope.go:117] "RemoveContainer" containerID="5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af" Dec 05 13:58:07 crc kubenswrapper[4858]: E1205 13:58:07.900004 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.919626 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.919662 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.919671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.919685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:07 crc kubenswrapper[4858]: I1205 13:58:07.919695 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:07Z","lastTransitionTime":"2025-12-05T13:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.021955 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.021991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.021999 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.022011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.022020 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.124148 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.124189 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.124207 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.124222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.124232 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.226449 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.226481 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.226490 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.226503 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.226512 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.329496 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.329561 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.329579 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.329601 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.329618 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.431506 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.431537 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.431545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.431558 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.431566 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.533722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.533754 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.533762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.533777 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.533790 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.636441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.636471 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.636479 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.636494 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.636503 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.739768 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.739895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.739914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.739944 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.739963 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.842719 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.843226 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.843417 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.843578 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.843721 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.898496 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:08 crc kubenswrapper[4858]: E1205 13:58:08.898717 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.947084 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.947119 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.947130 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.947149 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:08 crc kubenswrapper[4858]: I1205 13:58:08.947162 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:08Z","lastTransitionTime":"2025-12-05T13:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.049903 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.049948 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.049961 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.049983 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.049998 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.153038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.153118 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.153137 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.153165 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.153186 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.261790 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.261875 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.261889 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.261909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.261945 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.364781 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.364842 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.364860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.364883 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.364896 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.467811 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.467877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.467888 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.467904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.467917 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.570336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.570386 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.570397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.570414 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.570425 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.673034 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.673067 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.673079 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.673095 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.673106 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.765256 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.765321 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.765336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.765360 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.765377 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.779383 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:09Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.782657 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.782693 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.782701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.782718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.782728 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.795635 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:09Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.799372 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.799393 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.799401 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.799413 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.799422 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.812432 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:09Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.817784 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.817844 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.817860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.817882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.817898 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.828927 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:09Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.833172 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.833221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.833239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.833260 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.833273 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.846233 4858 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T13:58:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74cf7700-2214-426c-b823-5d8073a4da4d\\\",\\\"systemUUID\\\":\\\"15431bde-3216-4207-8a7b-b80a053431b8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:09Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.846374 4858 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.848565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.848606 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.848618 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.848638 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.848651 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.898553 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.898793 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.899028 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.899167 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.899354 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:09 crc kubenswrapper[4858]: E1205 13:58:09.899401 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.950722 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.950756 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.950765 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.950778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:09 crc kubenswrapper[4858]: I1205 13:58:09.950787 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:09Z","lastTransitionTime":"2025-12-05T13:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.053700 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.053740 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.053751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.053767 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.053778 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.155991 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.156021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.156029 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.156041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.156049 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.258751 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.258795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.258806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.258839 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.258855 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.361273 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.361299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.361306 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.361317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.361326 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.463522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.463749 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.463882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.463963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.464028 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.566227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.566286 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.566299 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.566315 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.566328 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.667988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.668026 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.668035 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.668047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.668056 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.771382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.771676 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.771802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.772001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.772182 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.874540 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.874577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.874587 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.874603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.874618 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.898639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:10 crc kubenswrapper[4858]: E1205 13:58:10.898802 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.976795 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.976851 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.976860 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.976873 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:10 crc kubenswrapper[4858]: I1205 13:58:10.976884 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:10Z","lastTransitionTime":"2025-12-05T13:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.079250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.079293 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.079311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.079333 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.079345 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.181381 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.181663 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.181748 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.181884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.181982 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.284852 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.284887 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.284898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.284914 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.284926 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.387434 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.387469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.387477 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.387491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.387502 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.466187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:11 crc kubenswrapper[4858]: E1205 13:58:11.466462 4858 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:58:11 crc kubenswrapper[4858]: E1205 13:58:11.466553 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs podName:6197c8ee-275b-44dd-b402-e4b8039c4997 nodeName:}" failed. No retries permitted until 2025-12-05 13:59:15.466536625 +0000 UTC m=+164.014134754 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs") pod "network-metrics-daemon-5jh87" (UID: "6197c8ee-275b-44dd-b402-e4b8039c4997") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.490038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.490078 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.490087 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.490109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.490126 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.593327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.593390 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.593402 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.593421 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.593432 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.695159 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.695410 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.695491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.695597 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.695677 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.798659 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.798695 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.798704 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.798718 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.798729 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.899150 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.899240 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.899304 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:11 crc kubenswrapper[4858]: E1205 13:58:11.899283 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:11 crc kubenswrapper[4858]: E1205 13:58:11.899391 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:11 crc kubenswrapper[4858]: E1205 13:58:11.899455 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.900956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.900980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.900988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.901000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.901015 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:11Z","lastTransitionTime":"2025-12-05T13:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.910650 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-87w6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a69d20a-c80f-4814-9cf2-fce9ade638c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a1a631549c5da6ea507d9e4db8632ea021515bab59c1f0f4d704bf4795897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnx5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-87w6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.921165 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjdj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19dac4e8-493c-456c-b8ea-cc1e48b9867c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:41Z\\\",\\\"message\\\":\\\"2025-12-05T13:56:54+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79\\\\n2025-12-05T13:56:54+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1e345b3b-804e-4faf-aea4-3d84839f9b79 to /host/opt/cni/bin/\\\\n2025-12-05T13:56:55Z [verbose] multus-daemon started\\\\n2025-12-05T13:56:55Z [verbose] Readiness Indicator file check\\\\n2025-12-05T13:57:40Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l54d2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjdj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.937838 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675851e1-3326-430c-b2cc-e4347c34e16d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35efc9a3ed384d21fd7421ed67b2ebd927a5c4c41e3bfd4a7e2a99bc13c68cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d271fa0840d2cf88379b2f99948884e9adf9dd42bd352fe624af58802a44670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a59fe583cd40b40bbed5c9cc2b4c8d28fe7026e81ed92ecac2408fe3aba993d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://705535abc28bdab8d4f15d679907d295d778991
e75637105d585b1536f51b1ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e74140c2c90a6d68281e01dbd6c8051341bcf44766991104ea9cf5f39b2b3e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69793efab060bb3e710a2cebebd70438ae5cc5b69b93fa9ed35d243b7197e97c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6391b77a885f21c1e02721d3bbd38d836388dd44535b8a002978fe2ed48e2f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d9d485a3f2c180d02d0f6984e4d07f855c830e5b8ea02a2123230c230e13ea6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.947717 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3124024-a408-41a9-a2d5-c839063bbb73\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://364129393fe733afe95e5aca07c0ff9db100dcedab449f4f50db499b90046a1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45ee
1e3e588b099ea3b0edf02ba290d666b2ce1625c5f39e3d14e8658816373a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45ee1e3e588b099ea3b0edf02ba290d666b2ce1625c5f39e3d14e8658816373a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.960322 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.972233 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ad7fdb1381b023033720493f38ca0be5b6591b2a9d9e460b80a0da57843864e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:11 crc kubenswrapper[4858]: I1205 13:58:11.986525 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b855b1c-b9bc-4249-80a9-87108585857f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a678119f02e7888384561f30fcc4dd57ffb4d448e99e9f03dabadc2d20523e26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58ee63d7e355433061b5f324e6f736ed6d2dfe21ea1969210a74c04836c65285\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd91fe57eb53e34ea64b3c9e21832485ec841c341bca56ea3fc443b869f735bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ca9fb3f3b15ecbb7f620324c3acc6c8cbbbb1d51daf85b6e4c759fd66a21a97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eca0a65d1e92dc96f902a9fa5abf3eafc1e341677b858fc99063ec8f7908bb0d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebd223c1e9e2fdcfe86a9812551cb92362144198337b43655999e1d08e269cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6891451a28fc0631046ff839712daa3e657c015d79efb38671f2e2693026601\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sr66j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-q8fqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:11Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.003992 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T13:57:50Z\\\",\\\"message\\\":\\\"w:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1205 13:57:50.614563 6740 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:57:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:57:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wl6f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jtntj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.005570 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.005633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.005644 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.005682 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.005695 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.015101 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5jh87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6197c8ee-275b-44dd-b402-e4b8039c4997\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mb8dw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:07Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5jh87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.026457 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09b3260-5282-40d6-a655-6aff613df0aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b50a643efcc2655aa9e3101b15cc2f24dc9ac70eabb50ecaa9595d1147e0879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba30db62f83c2241c2a888f7b3d2228b25c7a2ef98f4c5fd23edc7d9af2b55fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c15d6980cc91151c93928da5c5db8a71ac8827ffe6f4002e951e64fb4a585807\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.036267 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ae82b760-22fa-4c6a-9a79-ef1470efa29c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65f11f6d40d8871d6ef1689088574ec734b1fa60e283b8b9d53b50c676c8b1ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://623f1c2ae3fb10f9fffdf4009071d1ec9013129264051a33bc537c719949450c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d31c6004c742038f9c2eff257feb07383a37c2344aeb73293c97844472f41510\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5842ab5778eb875a3c70acb515b963cb2996c3459fbb21e5195a8ed4b3164606\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.047666 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://870776f11bb0daecfb2c3c7567db40705c033cabd3db3e7a6fcd2a3368f0618e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.059298 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.070586 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.081187 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1aba3b1-5c58-4ce7-b3b3-d4fd0d940804\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0330088b8dc1ddbca0617e2c1acfd0d3934ad049daf6529a7dc9617e26ab609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b8b9721a5d909c93dd05ac6dc862e47a1248b22d7d74dfddd83b401f2c5c75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pl9vh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pkkmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.091586 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ab8742a-625e-4bb8-9329-31f39a34fe48\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0cfe918d3fbed96e0dc1f365e92c41d5fcdd8cecd59e01073791febef273f36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krnc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:53Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vtgkn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.104744 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ee8667d-c367-46b9-8b51-335c4325c6ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:57:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T13:56:51Z\\\",\\\"message\\\":\\\"ey\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1764942995\\\\\\\\\\\\\\\" (2025-12-05 13:56:35 +0000 UTC to 2026-01-04 13:56:36 +0000 UTC (now=2025-12-05 13:56:51.341134948 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.348989 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349019 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1205 13:56:51.349091 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764943011\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764943011\\\\\\\\\\\\\\\" (2025-12-05 12:56:50 +0000 UTC to 2026-12-05 12:56:50 +0000 UTC (now=2025-12-05 13:56:51.349069995 +0000 UTC))\\\\\\\"\\\\nI1205 13:56:51.349091 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1205 13:56:51.349116 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1205 13:56:51.349124 1 secure_serving.go:213] Serving securely 
on [::]:17697\\\\nI1205 13:56:51.349151 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1205 13:56:51.349172 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1205 13:56:51.349348 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1205 13:56:51.349355 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1205 13:56:51.349383 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1205 13:56:51.349361 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T13:56:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T13:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc 
kubenswrapper[4858]: I1205 13:58:12.107953 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.107984 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.107995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.108010 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.108021 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.116281 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4db30783c1314c4f6f9c8710fbf48e522d7e26396fac5f7d059f6dcec05d628\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d32aed6c60b28e227703d4af869a9d62cd3ee13a86db2077b6f30e7fb9c7116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2025-12-05T13:56:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.125225 4858 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d85q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fdf51fde-d54f-4e8a-9a66-8abf33dce5e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T13:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8039fa0115236dce468cc26b62716533280c3b43269917b7650d383e56d496f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T13:56:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kzvnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T13:56:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d85q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T13:58:12Z is after 
2025-08-24T17:21:41Z" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.210616 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.210664 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.210673 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.210686 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.210696 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.312491 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.312522 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.312530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.312562 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.312571 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.414939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.414973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.414982 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.414995 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.415003 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.518324 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.518595 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.518669 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.518730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.518787 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.622773 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.622817 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.622849 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.622868 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.622880 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.725327 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.725396 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.725405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.725419 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.725428 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.828910 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.829000 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.829024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.829056 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.829076 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.898927 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:12 crc kubenswrapper[4858]: E1205 13:58:12.899133 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.932311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.932345 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.932358 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.932373 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:12 crc kubenswrapper[4858]: I1205 13:58:12.932384 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:12Z","lastTransitionTime":"2025-12-05T13:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.034812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.034882 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.034891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.034904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.034929 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.137685 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.137744 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.137760 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.137782 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.137796 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.241209 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.241303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.241331 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.241366 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.241391 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.344936 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.345021 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.345039 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.345059 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.345095 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.449411 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.449530 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.449603 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.449642 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.449700 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.552812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.552864 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.552875 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.552892 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.552903 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.660014 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.660092 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.660106 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.660126 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.660141 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.763007 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.763061 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.763070 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.763082 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.763091 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.866598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.866667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.866681 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.866697 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.866730 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.898634 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.898714 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.898753 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:13 crc kubenswrapper[4858]: E1205 13:58:13.898913 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:13 crc kubenswrapper[4858]: E1205 13:58:13.899042 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:13 crc kubenswrapper[4858]: E1205 13:58:13.899164 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.970024 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.970080 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.970088 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.970102 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:13 crc kubenswrapper[4858]: I1205 13:58:13.970112 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:13Z","lastTransitionTime":"2025-12-05T13:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.072918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.072979 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.072989 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.073004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.073013 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.178065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.178121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.178140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.178162 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.178185 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.282154 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.282219 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.282237 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.282275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.282298 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.385701 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.385776 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.385802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.385863 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.385888 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.489205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.489289 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.489316 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.489351 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.489452 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.594147 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.594195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.594210 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.594229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.594240 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.698170 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.698253 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.698276 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.698308 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.698329 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.800879 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.800970 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.801005 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.801038 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.801060 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.900174 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:14 crc kubenswrapper[4858]: E1205 13:58:14.900342 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.910173 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.910215 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.910227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.910244 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:14 crc kubenswrapper[4858]: I1205 13:58:14.910259 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:14Z","lastTransitionTime":"2025-12-05T13:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.013762 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.013802 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.013812 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.013895 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.013910 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.116861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.116900 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.116909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.116923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.116932 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.220447 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.220516 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.220528 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.220545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.220559 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.323778 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.323861 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.323875 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.323891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.323904 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.427801 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.427884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.427899 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.427918 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.428360 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.530382 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.530441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.530450 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.530464 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.530477 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.632361 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.632397 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.632405 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.632420 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.632428 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.734065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.734117 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.734132 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.734151 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.734166 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.836806 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.836909 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.836921 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.836939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.836952 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.898796 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.898862 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.898896 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:15 crc kubenswrapper[4858]: E1205 13:58:15.898941 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:15 crc kubenswrapper[4858]: E1205 13:58:15.899121 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:15 crc kubenswrapper[4858]: E1205 13:58:15.899231 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.939452 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.939492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.939502 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.939518 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:15 crc kubenswrapper[4858]: I1205 13:58:15.939528 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:15Z","lastTransitionTime":"2025-12-05T13:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.042047 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.042109 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.042121 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.042140 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.042151 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.144412 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.144463 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.144472 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.144486 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.144495 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.246847 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.246885 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.246898 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.246913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.246925 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.349598 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.349694 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.349717 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.349747 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.349769 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.452877 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.452939 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.452958 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.452980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.452997 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.555577 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.555633 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.555671 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.555687 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.555698 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.658205 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.658239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.658249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.658262 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.658272 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.760884 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.760931 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.760943 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.760957 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.760968 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.863433 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.863461 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.863469 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.863492 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.863501 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.899202 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:16 crc kubenswrapper[4858]: E1205 13:58:16.899324 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.965913 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.965952 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.965962 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.965977 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:16 crc kubenswrapper[4858]: I1205 13:58:16.965986 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:16Z","lastTransitionTime":"2025-12-05T13:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.069622 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.069667 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.069679 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.069696 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.069708 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.172179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.172251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.172277 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.172311 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.172333 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.275161 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.275197 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.275208 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.275222 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.275233 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.377288 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.377317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.377326 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.377338 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.377346 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.480441 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.480483 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.480499 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.480523 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.480539 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.582956 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.582980 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.582988 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.583001 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.583009 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.685163 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.685239 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.685250 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.685263 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.685272 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.787554 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.787805 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.787924 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.787997 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.788059 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.890996 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.891053 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.891068 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.891089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.891104 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.898344 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.898394 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.898441 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:17 crc kubenswrapper[4858]: E1205 13:58:17.898455 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:17 crc kubenswrapper[4858]: E1205 13:58:17.898544 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:17 crc kubenswrapper[4858]: E1205 13:58:17.898588 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.993193 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.993243 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.993254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.993269 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:17 crc kubenswrapper[4858]: I1205 13:58:17.993284 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:17Z","lastTransitionTime":"2025-12-05T13:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.095227 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.095265 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.095275 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.095290 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.095301 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.196891 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.196923 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.196933 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.196951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.196963 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.299303 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.299337 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.299350 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.299365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.299377 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.404011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.404077 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.404089 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.404129 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.404143 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.507124 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.507180 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.507196 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.507249 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.507268 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.609242 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.609282 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.609292 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.609307 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.609316 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.711195 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.711229 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.711238 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.711254 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.711265 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.813684 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.813709 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.813716 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.813730 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.813738 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.898758 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:18 crc kubenswrapper[4858]: E1205 13:58:18.898900 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.915963 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.916037 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.916050 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.916065 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:18 crc kubenswrapper[4858]: I1205 13:58:18.916077 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:18Z","lastTransitionTime":"2025-12-05T13:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.019425 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.019487 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.019509 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.019535 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.019556 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.122280 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.122317 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.122328 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.122344 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.122355 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.224510 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.224545 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.224553 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.224566 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.224577 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.331336 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.331816 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.331904 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.331973 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.332065 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.434788 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.435206 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.435365 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.435513 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.436071 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.539179 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.539221 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.539236 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.539251 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.539264 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.642143 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.642187 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.642199 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.642216 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.642229 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.743951 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.743993 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.744004 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.744020 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.744029 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.846565 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.846942 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.847141 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.847349 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.847491 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.898315 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.898335 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:19 crc kubenswrapper[4858]: E1205 13:58:19.898945 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.898372 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:19 crc kubenswrapper[4858]: E1205 13:58:19.899056 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:19 crc kubenswrapper[4858]: E1205 13:58:19.898808 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.950011 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.950041 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.950048 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.950063 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:58:19 crc kubenswrapper[4858]: I1205 13:58:19.950074 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:19Z","lastTransitionTime":"2025-12-05T13:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
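
Every entry above reduces to the same root cause: the kubelet reports NetworkReady=false because nothing has yet written a CNI configuration file into /etc/kubernetes/cni/net.d/, and until one appears it keeps re-recording the node conditions and refusing to create pod sandboxes. A minimal sketch of an equivalent readiness probe, in Python rather than the kubelet's own Go; the path comes from the log, while the extensions checked are an assumption about what the CNI config loader accepts:

    import os

    # Directory named in the kubelet messages above.
    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

    def network_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
        """True once at least one CNI config file exists
        (assumed extensions: .conf, .conflist, .json)."""
        try:
            entries = os.listdir(conf_dir)
        except FileNotFoundError:
            return False
        return any(e.endswith((".conf", ".conflist", ".json")) for e in entries)

    if __name__ == "__main__":
        state = "true" if network_ready() else "false"
        print(f"NetworkReady={state} for {CNI_CONF_DIR}")

Once the OVN-Kubernetes controller further down stays running long enough to write its config, this check, like the kubelet's real one, flips to true.
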
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.002505 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.002750 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.002815 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.002902 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.002968 4858 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T13:58:20Z","lastTransitionTime":"2025-12-05T13:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.049558 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"]
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.049996 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.051393 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.051490 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.051994 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.054581 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.065854 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podStartSLOduration=88.065814646 podStartE2EDuration="1m28.065814646s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.065657972 +0000 UTC m=+108.613256121" watchObservedRunningTime="2025-12-05 13:58:20.065814646 +0000 UTC m=+108.613412785"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.102123 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.102098952 podStartE2EDuration="1m28.102098952s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.089999876 +0000 UTC m=+108.637598035" watchObservedRunningTime="2025-12-05 13:58:20.102098952 +0000 UTC m=+108.649697131"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.113893 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-d85q7" podStartSLOduration=88.113872598 podStartE2EDuration="1m28.113872598s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.113844657 +0000 UTC m=+108.661442816" watchObservedRunningTime="2025-12-05 13:58:20.113872598 +0000 UTC m=+108.661470737"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.122600 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-87w6x" podStartSLOduration=88.122586483 podStartE2EDuration="1m28.122586483s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.122270253 +0000 UTC m=+108.669868402" watchObservedRunningTime="2025-12-05 13:58:20.122586483 +0000 UTC m=+108.670184612"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.167255 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fjdj6" podStartSLOduration=88.167235923 podStartE2EDuration="1m28.167235923s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.137564844 +0000 UTC m=+108.685162993" watchObservedRunningTime="2025-12-05 13:58:20.167235923 +0000 UTC m=+108.714834072"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.168082 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.168074595 podStartE2EDuration="1m26.168074595s" podCreationTimestamp="2025-12-05 13:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.166530473 +0000 UTC m=+108.714128652" watchObservedRunningTime="2025-12-05 13:58:20.168074595 +0000 UTC m=+108.715672744"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.172087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3383b882-9d7d-45fc-a73a-62a29ebae029-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.172316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3383b882-9d7d-45fc-a73a-62a29ebae029-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.172448 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3383b882-9d7d-45fc-a73a-62a29ebae029-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.172556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3383b882-9d7d-45fc-a73a-62a29ebae029-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.172679 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3383b882-9d7d-45fc-a73a-62a29ebae029-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.180507 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.180488698 podStartE2EDuration="22.180488698s" podCreationTimestamp="2025-12-05 13:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.180436397 +0000 UTC m=+108.728034546" watchObservedRunningTime="2025-12-05 13:58:20.180488698 +0000 UTC m=+108.728086837"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.220228 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-q8fqr" podStartSLOduration=88.220211026 podStartE2EDuration="1m28.220211026s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.219477116 +0000 UTC m=+108.767075255" watchObservedRunningTime="2025-12-05 13:58:20.220211026 +0000 UTC m=+108.767809165"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.273680 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3383b882-9d7d-45fc-a73a-62a29ebae029-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.273775 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3383b882-9d7d-45fc-a73a-62a29ebae029-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.273799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3383b882-9d7d-45fc-a73a-62a29ebae029-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.273816 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3383b882-9d7d-45fc-a73a-62a29ebae029-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.273813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3383b882-9d7d-45fc-a73a-62a29ebae029-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.273938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3383b882-9d7d-45fc-a73a-62a29ebae029-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.274017 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3383b882-9d7d-45fc-a73a-62a29ebae029-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.274957 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3383b882-9d7d-45fc-a73a-62a29ebae029-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.287759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3383b882-9d7d-45fc-a73a-62a29ebae029-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.288359 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3383b882-9d7d-45fc-a73a-62a29ebae029-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xfmrt\" (UID: \"3383b882-9d7d-45fc-a73a-62a29ebae029\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.293052 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.293034953 podStartE2EDuration="1m29.293034953s" podCreationTimestamp="2025-12-05 13:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.268070323 +0000 UTC m=+108.815668472" watchObservedRunningTime="2025-12-05 13:58:20.293034953 +0000 UTC m=+108.840633092"
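
The reconciler entries above trace the kubelet's two-phase volume flow for the cluster-version-operator pod: VerifyControllerAttachedVolume, then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded", once per volume (kube-api-access, service-ca, etc-ssl-certs, serving-cert, etc-cvo-updatepayloads). A small triage sketch that pairs the started/succeeded lines per volume name, useful for spotting a mount that never completes; kubelet.log is a placeholder for a file holding journal output like the above:

    import re
    from collections import defaultdict

    LOG_PATH = "kubelet.log"  # placeholder path

    STARTED = re.compile(r'MountVolume started for volume \\?"([\w.-]+)\\?"')
    DONE = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"([\w.-]+)\\?"')

    phases = defaultdict(set)
    with open(LOG_PATH, encoding="utf-8") as f:
        for line in f:
            for rx, phase in ((STARTED, "started"), (DONE, "succeeded")):
                m = rx.search(line)
                if m:
                    phases[m.group(1)].add(phase)

    for volume, seen in sorted(phases.items()):
        print(volume, "ok" if "succeeded" in seen else "no SetUp succeeded yet")

In the log above all five volumes reach SetUp succeeded within roughly 120 ms of the first VerifyControllerAttachedVolume entry.
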
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.293570 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=60.293567048 podStartE2EDuration="1m0.293567048s" podCreationTimestamp="2025-12-05 13:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.292818917 +0000 UTC m=+108.840417056" watchObservedRunningTime="2025-12-05 13:58:20.293567048 +0000 UTC m=+108.841165187"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.348162 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pkkmh" podStartSLOduration=87.348144754 podStartE2EDuration="1m27.348144754s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:20.347056015 +0000 UTC m=+108.894654154" watchObservedRunningTime="2025-12-05 13:58:20.348144754 +0000 UTC m=+108.895742893"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.386697 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt"
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.471428 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt" event={"ID":"3383b882-9d7d-45fc-a73a-62a29ebae029","Type":"ContainerStarted","Data":"fbe8fd8adfce84c3770b9c3832561d71e967e5d7d889d23960705e318e8451fb"}
Dec 05 13:58:20 crc kubenswrapper[4858]: I1205 13:58:20.898477 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:20 crc kubenswrapper[4858]: E1205 13:58:20.898672 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:21 crc kubenswrapper[4858]: I1205 13:58:21.476026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt" event={"ID":"3383b882-9d7d-45fc-a73a-62a29ebae029","Type":"ContainerStarted","Data":"aab1bd8807a14ac2201752698111ffe158dcbf76b8fadcfa73cfa7448d3fea71"}
Dec 05 13:58:21 crc kubenswrapper[4858]: I1205 13:58:21.491028 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xfmrt" podStartSLOduration=89.491011448 podStartE2EDuration="1m29.491011448s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:21.488675485 +0000 UTC m=+110.036273634" watchObservedRunningTime="2025-12-05 13:58:21.491011448 +0000 UTC m=+110.038609587"
Dec 05 13:58:21 crc kubenswrapper[4858]: I1205 13:58:21.898639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
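
Each podStartSLOduration above is just observedRunningTime minus podCreationTimestamp, and the zeroed firstStartedPulling/lastFinishedPulling values (0001-01-01) indicate that no image pull was observed for these pods. Reproducing the machine-config-daemon-vtgkn figure from its two logged timestamps; the kubelet samples the clock separately when formatting the message, so this agrees with the logged 88.065814646 only to within a fraction of a millisecond:

    from datetime import datetime, timezone

    # Timestamps copied from the machine-config-daemon-vtgkn entry
    # (nanoseconds truncated to microseconds for datetime).
    created = datetime(2025, 12, 5, 13, 56, 52, tzinfo=timezone.utc)
    observed = datetime(2025, 12, 5, 13, 58, 20, 65657, tzinfo=timezone.utc)

    print((observed - created).total_seconds())  # 88.065657
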
Dec 05 13:58:21 crc kubenswrapper[4858]: I1205 13:58:21.898639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:21 crc kubenswrapper[4858]: I1205 13:58:21.898940 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:21 crc kubenswrapper[4858]: E1205 13:58:21.901263 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:21 crc kubenswrapper[4858]: E1205 13:58:21.901579 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:21 crc kubenswrapper[4858]: E1205 13:58:21.903719 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:22 crc kubenswrapper[4858]: I1205 13:58:22.899114 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:22 crc kubenswrapper[4858]: E1205 13:58:22.899228 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:22 crc kubenswrapper[4858]: I1205 13:58:22.900159 4858 scope.go:117] "RemoveContainer" containerID="5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af"
Dec 05 13:58:22 crc kubenswrapper[4858]: E1205 13:58:22.900377 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jtntj_openshift-ovn-kubernetes(e675fbac-caa5-466d-92d2-e7c6f0dd0d5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"
Dec 05 13:58:23 crc kubenswrapper[4858]: I1205 13:58:23.898369 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:23 crc kubenswrapper[4858]: I1205 13:58:23.898369 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:23 crc kubenswrapper[4858]: E1205 13:58:23.898851 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:23 crc kubenswrapper[4858]: E1205 13:58:23.898935 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:23 crc kubenswrapper[4858]: I1205 13:58:23.898520 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:23 crc kubenswrapper[4858]: E1205 13:58:23.899009 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:24 crc kubenswrapper[4858]: I1205 13:58:24.898969 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:24 crc kubenswrapper[4858]: E1205 13:58:24.899100 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:25 crc kubenswrapper[4858]: I1205 13:58:25.898219 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:25 crc kubenswrapper[4858]: E1205 13:58:25.898538 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:25 crc kubenswrapper[4858]: I1205 13:58:25.898380 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:25 crc kubenswrapper[4858]: E1205 13:58:25.898744 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:25 crc kubenswrapper[4858]: I1205 13:58:25.898227 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:25 crc kubenswrapper[4858]: E1205 13:58:25.898943 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:26 crc kubenswrapper[4858]: I1205 13:58:26.898286 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:26 crc kubenswrapper[4858]: E1205 13:58:26.898420 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:27 crc kubenswrapper[4858]: I1205 13:58:27.899236 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:27 crc kubenswrapper[4858]: E1205 13:58:27.899376 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:27 crc kubenswrapper[4858]: I1205 13:58:27.899456 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:27 crc kubenswrapper[4858]: I1205 13:58:27.899250 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:27 crc kubenswrapper[4858]: E1205 13:58:27.899702 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:27 crc kubenswrapper[4858]: E1205 13:58:27.900043 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:28 crc kubenswrapper[4858]: I1205 13:58:28.899196 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:28 crc kubenswrapper[4858]: E1205 13:58:28.899349 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:29 crc kubenswrapper[4858]: I1205 13:58:29.898907 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:29 crc kubenswrapper[4858]: I1205 13:58:29.898911 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:29 crc kubenswrapper[4858]: E1205 13:58:29.899097 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:29 crc kubenswrapper[4858]: E1205 13:58:29.899171 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:29 crc kubenswrapper[4858]: I1205 13:58:29.898931 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:29 crc kubenswrapper[4858]: E1205 13:58:29.899241 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:30 crc kubenswrapper[4858]: I1205 13:58:30.898539 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:30 crc kubenswrapper[4858]: E1205 13:58:30.898658 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:31 crc kubenswrapper[4858]: E1205 13:58:31.894879 4858 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Dec 05 13:58:31 crc kubenswrapper[4858]: I1205 13:58:31.898293 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:31 crc kubenswrapper[4858]: I1205 13:58:31.898353 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:31 crc kubenswrapper[4858]: E1205 13:58:31.899317 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:31 crc kubenswrapper[4858]: I1205 13:58:31.899352 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:31 crc kubenswrapper[4858]: E1205 13:58:31.899578 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:31 crc kubenswrapper[4858]: E1205 13:58:31.899704 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:32 crc kubenswrapper[4858]: E1205 13:58:32.008024 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 05 13:58:32 crc kubenswrapper[4858]: I1205 13:58:32.507218 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/1.log"
Dec 05 13:58:32 crc kubenswrapper[4858]: I1205 13:58:32.507728 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/0.log"
Dec 05 13:58:32 crc kubenswrapper[4858]: I1205 13:58:32.507774 4858 generic.go:334] "Generic (PLEG): container finished" podID="19dac4e8-493c-456c-b8ea-cc1e48b9867c" containerID="1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6" exitCode=1
Dec 05 13:58:32 crc kubenswrapper[4858]: I1205 13:58:32.507803 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjdj6" event={"ID":"19dac4e8-493c-456c-b8ea-cc1e48b9867c","Type":"ContainerDied","Data":"1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6"}
Dec 05 13:58:32 crc kubenswrapper[4858]: I1205 13:58:32.507850 4858 scope.go:117] "RemoveContainer" containerID="c07ee28495e9a9df2a5923d37f65114db8e7b2e6740e9f22e27e9cc1c651dfbf"
Dec 05 13:58:32 crc kubenswrapper[4858]: I1205 13:58:32.508320 4858 scope.go:117] "RemoveContainer" containerID="1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6"
Dec 05 13:58:32 crc kubenswrapper[4858]: E1205 13:58:32.508488 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fjdj6_openshift-multus(19dac4e8-493c-456c-b8ea-cc1e48b9867c)\"" pod="openshift-multus/multus-fjdj6" podUID="19dac4e8-493c-456c-b8ea-cc1e48b9867c"
Dec 05 13:58:32 crc kubenswrapper[4858]: I1205 13:58:32.898440 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:32 crc kubenswrapper[4858]: E1205 13:58:32.898562 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:33 crc kubenswrapper[4858]: I1205 13:58:33.512006 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/1.log"
Dec 05 13:58:33 crc kubenswrapper[4858]: I1205 13:58:33.898675 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:33 crc kubenswrapper[4858]: I1205 13:58:33.898743 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:33 crc kubenswrapper[4858]: I1205 13:58:33.898787 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
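
The two CrashLoopBackOff messages in this window sit on different rungs of the same ladder: the kubelet delays container restarts with an exponential back-off that by default starts at 10s and doubles up to a 5-minute cap, so kube-multus ("back-off 10s") has only just begun failing while ovnkube-controller ("back-off 40s" at 13:58:22) is already three rungs in. A sketch of the schedule, assuming the default base and cap:

    # Kubelet container restart back-off: base 10s, doubling, capped at 300s
    # (defaults; both values are assumptions about this cluster's config).
    BASE_S, CAP_S = 10, 300

    def backoff_schedule(restarts: int):
        """Delay applied before each of the first `restarts` restarts."""
        delay = BASE_S
        for _ in range(restarts):
            yield delay
            delay = min(delay * 2, CAP_S)

    print(list(backoff_schedule(7)))  # [10, 20, 40, 80, 160, 300, 300]
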
Dec 05 13:58:33 crc kubenswrapper[4858]: E1205 13:58:33.898792 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:33 crc kubenswrapper[4858]: E1205 13:58:33.898858 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:33 crc kubenswrapper[4858]: E1205 13:58:33.898974 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:34 crc kubenswrapper[4858]: I1205 13:58:34.898359 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:34 crc kubenswrapper[4858]: E1205 13:58:34.898500 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:35 crc kubenswrapper[4858]: I1205 13:58:35.898339 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:35 crc kubenswrapper[4858]: I1205 13:58:35.898352 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:35 crc kubenswrapper[4858]: I1205 13:58:35.898381 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:35 crc kubenswrapper[4858]: E1205 13:58:35.898578 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:35 crc kubenswrapper[4858]: E1205 13:58:35.898660 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:35 crc kubenswrapper[4858]: E1205 13:58:35.898793 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:35 crc kubenswrapper[4858]: I1205 13:58:35.899533 4858 scope.go:117] "RemoveContainer" containerID="5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af"
Dec 05 13:58:36 crc kubenswrapper[4858]: I1205 13:58:36.522095 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/3.log"
Dec 05 13:58:36 crc kubenswrapper[4858]: I1205 13:58:36.525182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerStarted","Data":"611593e9406f66fd9b7a45a42975c96597f67d79f43cb9a6f559ac14d2bfb1f5"}
Dec 05 13:58:36 crc kubenswrapper[4858]: I1205 13:58:36.525715 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj"
Dec 05 13:58:36 crc kubenswrapper[4858]: I1205 13:58:36.550948 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podStartSLOduration=104.550929347 podStartE2EDuration="1m44.550929347s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:36.54957441 +0000 UTC m=+125.097172549" watchObservedRunningTime="2025-12-05 13:58:36.550929347 +0000 UTC m=+125.098527496"
Dec 05 13:58:36 crc kubenswrapper[4858]: I1205 13:58:36.861480 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5jh87"]
Dec 05 13:58:36 crc kubenswrapper[4858]: I1205 13:58:36.861845 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:36 crc kubenswrapper[4858]: E1205 13:58:36.861935 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:37 crc kubenswrapper[4858]: E1205 13:58:37.009093 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 05 13:58:37 crc kubenswrapper[4858]: I1205 13:58:37.898893 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 13:58:37 crc kubenswrapper[4858]: E1205 13:58:37.899038 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 05 13:58:37 crc kubenswrapper[4858]: I1205 13:58:37.899261 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:37 crc kubenswrapper[4858]: E1205 13:58:37.899331 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 05 13:58:37 crc kubenswrapper[4858]: I1205 13:58:37.899484 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:58:37 crc kubenswrapper[4858]: E1205 13:58:37.899549 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 05 13:58:38 crc kubenswrapper[4858]: I1205 13:58:38.898629 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:58:38 crc kubenswrapper[4858]: E1205 13:58:38.898763 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997"
Dec 05 13:58:39 crc kubenswrapper[4858]: I1205 13:58:39.898845 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 13:58:39 crc kubenswrapper[4858]: I1205 13:58:39.898946 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:39 crc kubenswrapper[4858]: E1205 13:58:39.899086 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:39 crc kubenswrapper[4858]: E1205 13:58:39.898968 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:39 crc kubenswrapper[4858]: I1205 13:58:39.899244 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:39 crc kubenswrapper[4858]: E1205 13:58:39.899310 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:40 crc kubenswrapper[4858]: I1205 13:58:40.898572 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:40 crc kubenswrapper[4858]: E1205 13:58:40.899002 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:41 crc kubenswrapper[4858]: I1205 13:58:41.899016 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:41 crc kubenswrapper[4858]: E1205 13:58:41.899505 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:41 crc kubenswrapper[4858]: I1205 13:58:41.899652 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:41 crc kubenswrapper[4858]: I1205 13:58:41.899679 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:41 crc kubenswrapper[4858]: E1205 13:58:41.899978 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:41 crc kubenswrapper[4858]: E1205 13:58:41.899889 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:42 crc kubenswrapper[4858]: E1205 13:58:42.014854 4858 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 05 13:58:42 crc kubenswrapper[4858]: I1205 13:58:42.898246 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:42 crc kubenswrapper[4858]: E1205 13:58:42.898509 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:42 crc kubenswrapper[4858]: I1205 13:58:42.898625 4858 scope.go:117] "RemoveContainer" containerID="1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6" Dec 05 13:58:43 crc kubenswrapper[4858]: I1205 13:58:43.545395 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/1.log" Dec 05 13:58:43 crc kubenswrapper[4858]: I1205 13:58:43.545451 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjdj6" event={"ID":"19dac4e8-493c-456c-b8ea-cc1e48b9867c","Type":"ContainerStarted","Data":"bc95bceb703d4245508b3fa427ca29bcfe32dd8543a74a22f2f8c84ce26f20ab"} Dec 05 13:58:43 crc kubenswrapper[4858]: I1205 13:58:43.898558 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:43 crc kubenswrapper[4858]: I1205 13:58:43.898603 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:43 crc kubenswrapper[4858]: I1205 13:58:43.898567 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:43 crc kubenswrapper[4858]: E1205 13:58:43.898737 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:43 crc kubenswrapper[4858]: E1205 13:58:43.898788 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:43 crc kubenswrapper[4858]: E1205 13:58:43.898883 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:44 crc kubenswrapper[4858]: I1205 13:58:44.898654 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:44 crc kubenswrapper[4858]: E1205 13:58:44.898777 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:45 crc kubenswrapper[4858]: I1205 13:58:45.898588 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:45 crc kubenswrapper[4858]: E1205 13:58:45.898703 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 13:58:45 crc kubenswrapper[4858]: I1205 13:58:45.898945 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:45 crc kubenswrapper[4858]: I1205 13:58:45.899011 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:45 crc kubenswrapper[4858]: E1205 13:58:45.899123 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 13:58:45 crc kubenswrapper[4858]: E1205 13:58:45.899162 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 13:58:46 crc kubenswrapper[4858]: I1205 13:58:46.899789 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:46 crc kubenswrapper[4858]: E1205 13:58:46.899949 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5jh87" podUID="6197c8ee-275b-44dd-b402-e4b8039c4997" Dec 05 13:58:47 crc kubenswrapper[4858]: I1205 13:58:47.899166 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:47 crc kubenswrapper[4858]: I1205 13:58:47.900087 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:47 crc kubenswrapper[4858]: I1205 13:58:47.900501 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:47 crc kubenswrapper[4858]: I1205 13:58:47.903005 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 13:58:47 crc kubenswrapper[4858]: I1205 13:58:47.904077 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 13:58:47 crc kubenswrapper[4858]: I1205 13:58:47.904408 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 13:58:47 crc kubenswrapper[4858]: I1205 13:58:47.905502 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 13:58:48 crc kubenswrapper[4858]: I1205 13:58:48.898785 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:58:48 crc kubenswrapper[4858]: I1205 13:58:48.902098 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 13:58:48 crc kubenswrapper[4858]: I1205 13:58:48.902399 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.846083 4858 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.879863 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.880490 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.883208 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.883364 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.890982 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.891092 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.891254 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.891916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.892062 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.892064 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.892187 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.900688 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfbnh"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.901382 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.901943 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c7tvn"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.902717 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.902997 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.903462 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.910127 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.910312 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.910544 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913107 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913272 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913418 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913464 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913500 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913598 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913741 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913898 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.914064 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.914208 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.913946 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.914113 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.914529 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.915964 4858 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.924184 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.924661 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.924907 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.926138 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.926649 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.929633 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.930148 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fgpw2"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.930409 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.930742 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.931389 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.931791 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.932639 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4zztz"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.933010 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-rzsvl"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.933260 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-rzsvl" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.933457 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.934006 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.934513 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.939722 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.940086 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qnpwj"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.940316 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.940574 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.941153 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.941381 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.945207 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-dir\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.945445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47e4924d-05ae-4236-b6e8-4af7b98ce486-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.945538 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-serving-cert\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.945620 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.945694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-serving-cert\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc 
kubenswrapper[4858]: I1205 13:58:50.945775 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.945870 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-config\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.945794 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-26jzf"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-audit\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946091 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f47f6b4-2307-4660-b7d6-61a604ee2a81-audit-dir\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946171 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9khj\" (UniqueName: \"kubernetes.io/projected/4aded898-143e-40c9-99b8-5dd45d739d64-kube-api-access-s9khj\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946253 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db8cbc4d-eadf-4949-9b00-760f67bd0442-serving-cert\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946323 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946393 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: 
\"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbz5\" (UniqueName: \"kubernetes.io/projected/47e4924d-05ae-4236-b6e8-4af7b98ce486-kube-api-access-sbbz5\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946548 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946618 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946690 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946760 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvsx\" (UniqueName: \"kubernetes.io/projected/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-kube-api-access-knvsx\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946858 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbfzw\" (UniqueName: \"kubernetes.io/projected/db8cbc4d-eadf-4949-9b00-760f67bd0442-kube-api-access-bbfzw\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.946940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f47f6b4-2307-4660-b7d6-61a604ee2a81-node-pullsecrets\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947009 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947078 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-client-ca\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947149 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947220 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdhv4\" (UniqueName: \"kubernetes.io/projected/20f59d96-5524-4b11-ac3b-b2634f94b6f7-kube-api-access-fdhv4\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947366 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-auth-proxy-config\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f59d96-5524-4b11-ac3b-b2634f94b6f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947539 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db8cbc4d-eadf-4949-9b00-760f67bd0442-available-featuregates\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947637 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jpcp\" (UniqueName: 
\"kubernetes.io/projected/5f47f6b4-2307-4660-b7d6-61a604ee2a81-kube-api-access-5jpcp\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947709 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947750 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947833 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.947781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948067 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-image-import-ca\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-config\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948207 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xh5f\" (UniqueName: \"kubernetes.io/projected/2db6d150-e5c9-41b2-9289-2f6ee74c648b-kube-api-access-4xh5f\") pod \"downloads-7954f5f757-rzsvl\" (UID: \"2db6d150-e5c9-41b2-9289-2f6ee74c648b\") " pod="openshift-console/downloads-7954f5f757-rzsvl" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948232 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948250 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-policies\") pod 
\"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6dd\" (UniqueName: \"kubernetes.io/projected/065bd27a-40da-4591-82c4-2c1e8717b9d6-kube-api-access-mb6dd\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948381 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47e4924d-05ae-4236-b6e8-4af7b98ce486-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948395 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aded898-143e-40c9-99b8-5dd45d739d64-audit-dir\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948414 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948598 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee76bb43-a079-4631-aace-ba93a4e04e4a-serving-cert\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948619 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e6696fd-dfa5-4863-ae4f-bac4c2379404-serving-cert\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948637 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-config\") 
pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-encryption-config\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-etcd-client\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948798 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-audit-policies\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948813 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-encryption-config\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-config\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-service-ca-bundle\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948919 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-etcd-client\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948712 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-config\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.948480 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xxk7s"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.949278 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.949595 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-machine-approver-tls\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.949630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-client-ca\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.949649 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzpp2\" (UniqueName: \"kubernetes.io/projected/ee76bb43-a079-4631-aace-ba93a4e04e4a-kube-api-access-dzpp2\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.949677 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.949699 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47e4924d-05ae-4236-b6e8-4af7b98ce486-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.949715 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-stm5j\" (UniqueName: \"kubernetes.io/projected/6e6696fd-dfa5-4863-ae4f-bac4c2379404-kube-api-access-stm5j\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.950598 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.951049 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.951238 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.951514 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.952160 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.952622 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x25gp"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.953209 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.964975 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.966541 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.966766 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.971277 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q7jsq"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.971992 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.972423 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.972644 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.973218 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.973530 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.973705 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.973833 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x25gp" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.983657 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.984854 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.984994 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985087 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985109 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985177 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985261 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985316 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985379 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985432 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985498 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985564 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.985622 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.986978 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-trcq9"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.987387 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.988592 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.988877 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.989047 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.989573 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.990111 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.990353 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.990628 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.990784 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.990944 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.990967 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.993000 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.993184 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.993466 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.993600 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.993752 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.993778 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.993882 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.993915 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 
13:58:50.993972 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994014 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994050 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994077 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994119 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994176 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994201 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994238 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994269 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994298 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994337 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994374 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994424 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994464 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994494 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994519 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994584 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994599 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994615 4858 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994669 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994712 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.994438 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.996575 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.996954 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.997389 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc" Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.997742 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9"] Dec 05 13:58:50 crc kubenswrapper[4858]: I1205 13:58:50.998045 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.004095 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.007789 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.010004 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.013663 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfbnh"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.013879 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kmzj6"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.014459 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kmzj6" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.016177 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.016326 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.016766 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.017272 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.017534 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.018259 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.019293 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.021544 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.024133 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.037538 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.042385 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.043455 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.045009 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.046698 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.047327 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.048551 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-config\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050596 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-audit\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f47f6b4-2307-4660-b7d6-61a604ee2a81-audit-dir\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050646 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9khj\" (UniqueName: \"kubernetes.io/projected/4aded898-143e-40c9-99b8-5dd45d739d64-kube-api-access-s9khj\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050674 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db8cbc4d-eadf-4949-9b00-760f67bd0442-serving-cert\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050765 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sbbz5\" (UniqueName: \"kubernetes.io/projected/47e4924d-05ae-4236-b6e8-4af7b98ce486-kube-api-access-sbbz5\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050787 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050811 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5x77\" (UniqueName: \"kubernetes.io/projected/313be014-d206-4d8a-a459-8f1a34bb4e7a-kube-api-access-l5x77\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knvsx\" (UniqueName: \"kubernetes.io/projected/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-kube-api-access-knvsx\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbfzw\" (UniqueName: \"kubernetes.io/projected/db8cbc4d-eadf-4949-9b00-760f67bd0442-kube-api-access-bbfzw\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.050929 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f47f6b4-2307-4660-b7d6-61a604ee2a81-node-pullsecrets\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051032 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-etcd-serving-ca\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-client-ca\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051079 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-service-ca\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdhv4\" (UniqueName: \"kubernetes.io/projected/20f59d96-5524-4b11-ac3b-b2634f94b6f7-kube-api-access-fdhv4\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051205 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051229 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-auth-proxy-config\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051362 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/20f59d96-5524-4b11-ac3b-b2634f94b6f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051379 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db8cbc4d-eadf-4949-9b00-760f67bd0442-available-featuregates\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jpcp\" (UniqueName: \"kubernetes.io/projected/5f47f6b4-2307-4660-b7d6-61a604ee2a81-kube-api-access-5jpcp\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051499 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051524 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051547 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-image-import-ca\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-config\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051593 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xh5f\" (UniqueName: \"kubernetes.io/projected/2db6d150-e5c9-41b2-9289-2f6ee74c648b-kube-api-access-4xh5f\") pod \"downloads-7954f5f757-rzsvl\" (UID: \"2db6d150-e5c9-41b2-9289-2f6ee74c648b\") " pod="openshift-console/downloads-7954f5f757-rzsvl" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051717 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: 
\"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-policies\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051784 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6dd\" (UniqueName: \"kubernetes.io/projected/065bd27a-40da-4591-82c4-2c1e8717b9d6-kube-api-access-mb6dd\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051812 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47e4924d-05ae-4236-b6e8-4af7b98ce486-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051851 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aded898-143e-40c9-99b8-5dd45d739d64-audit-dir\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051877 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.051897 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee76bb43-a079-4631-aace-ba93a4e04e4a-serving-cert\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.052561 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 
13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053104 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjc62\" (UniqueName: \"kubernetes.io/projected/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-kube-api-access-xjc62\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053164 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e6696fd-dfa5-4863-ae4f-bac4c2379404-serving-cert\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053187 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/313be014-d206-4d8a-a459-8f1a34bb4e7a-serving-cert\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053212 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-config\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053289 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-encryption-config\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-etcd-client\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-audit-policies\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-encryption-config\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053468 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-config\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-service-ca-bundle\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053531 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-etcd-client\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053551 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-config\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-config\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-machine-approver-tls\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-client-ca\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.053689 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpp2\" (UniqueName: \"kubernetes.io/projected/ee76bb43-a079-4631-aace-ba93a4e04e4a-kube-api-access-dzpp2\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.082020 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.082559 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-audit\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.082619 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f47f6b4-2307-4660-b7d6-61a604ee2a81-audit-dir\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.082648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-config\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.099680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.100355 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.101611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.102686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc 
kubenswrapper[4858]: I1205 13:58:51.103229 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.103924 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-client-ca\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.104312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-trusted-ca-bundle\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.107802 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.108395 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e6696fd-dfa5-4863-ae4f-bac4c2379404-serving-cert\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.109294 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f59d96-5524-4b11-ac3b-b2634f94b6f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.109382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f47f6b4-2307-4660-b7d6-61a604ee2a81-node-pullsecrets\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.109629 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-image-import-ca\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.110056 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.111012 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db8cbc4d-eadf-4949-9b00-760f67bd0442-serving-cert\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.111606 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-config\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.111848 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db8cbc4d-eadf-4949-9b00-760f67bd0442-available-featuregates\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.112097 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.112657 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.112799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aded898-143e-40c9-99b8-5dd45d739d64-audit-policies\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.112932 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-client\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113111 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/47e4924d-05ae-4236-b6e8-4af7b98ce486-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stm5j\" (UniqueName: \"kubernetes.io/projected/6e6696fd-dfa5-4863-ae4f-bac4c2379404-kube-api-access-stm5j\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-dir\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47e4924d-05ae-4236-b6e8-4af7b98ce486-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-serving-cert\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-ca\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113548 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-dir\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113557 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f47f6b4-2307-4660-b7d6-61a604ee2a81-config\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " 
pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113609 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-serving-cert\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.114905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.115306 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-policies\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.115717 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47e4924d-05ae-4236-b6e8-4af7b98ce486-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.113516 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.115806 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.116108 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-encryption-config\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.116683 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.116808 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qbn5"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.116869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aded898-143e-40c9-99b8-5dd45d739d64-audit-dir\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.117459 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.117626 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.118031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-serving-cert\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.118368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.118602 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.122574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-client-ca\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.123047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-auth-proxy-config\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.123620 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.124200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e6696fd-dfa5-4863-ae4f-bac4c2379404-service-ca-bundle\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.130424 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-etcd-client\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.130659 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.130939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-config\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.130982 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.067150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-config\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.131304 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c7tvn"] Dec 05 13:58:51 crc 
kubenswrapper[4858]: I1205 13:58:51.131334 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.131585 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.131804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee76bb43-a079-4631-aace-ba93a4e04e4a-serving-cert\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.133705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.135873 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/47e4924d-05ae-4236-b6e8-4af7b98ce486-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.136438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-encryption-config\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.136490 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.137119 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.139168 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.140304 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.140776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.140801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f47f6b4-2307-4660-b7d6-61a604ee2a81-serving-cert\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.142167 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.143131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4aded898-143e-40c9-99b8-5dd45d739d64-etcd-client\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.143472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-machine-approver-tls\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.143493 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.146339 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.146853 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.147153 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.147667 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.148343 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rzsvl"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.150505 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.150680 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4zztz"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.152266 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9qgzs"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.152882 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.153205 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.153858 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.154071 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8x88"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.154600 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t8x88" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.155071 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zrzh2"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.155489 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zrzh2" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.156334 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.162507 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5c95q"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.163146 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5c95q" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.164269 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l27jv"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.164996 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.166215 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.166308 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.166955 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fgpw2"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.168179 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qnpwj"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.169400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-26jzf"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.170474 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.173608 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.175518 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.178243 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q7jsq"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.179231 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.192309 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.192568 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.209934 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x25gp"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.210013 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.213507 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-trcq9"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-client\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-ca\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214396 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5x77\" (UniqueName: 
\"kubernetes.io/projected/313be014-d206-4d8a-a459-8f1a34bb4e7a-kube-api-access-l5x77\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-service-ca\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214593 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjc62\" (UniqueName: \"kubernetes.io/projected/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-kube-api-access-xjc62\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214662 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/313be014-d206-4d8a-a459-8f1a34bb4e7a-serving-cert\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.214691 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-config\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.215644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-config\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.215774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-service-ca\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.216154 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-ca\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.216628 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.219205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.219930 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.221072 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/313be014-d206-4d8a-a459-8f1a34bb4e7a-serving-cert\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.221305 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.221363 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.222958 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.223166 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/313be014-d206-4d8a-a459-8f1a34bb4e7a-etcd-client\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.224106 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.231730 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.231763 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.232915 4858 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.234330 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9qgzs"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.236021 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qbn5"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.237111 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.238163 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5c95q"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.239524 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.242278 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.242549 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.243843 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8x88"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.245016 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.246544 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xxk7s"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.247771 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.249599 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l27jv"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.250610 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9dl2k"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.251358 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9dl2k" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.252235 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9dl2k"] Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.262473 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.283389 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.303581 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.322832 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.342683 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.362266 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.382594 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.403419 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.422462 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.442716 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.462707 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.483698 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.502981 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.523274 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.543417 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.562603 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.583266 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.602934 4858 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.642719 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.661944 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.682180 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.703403 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.722748 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.749451 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.762902 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.782811 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.803388 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.822653 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.863001 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.882530 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.904285 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.922427 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.943404 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.964271 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 05 13:58:51 crc kubenswrapper[4858]: I1205 13:58:51.982785 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.003253 4858 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.022782 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.043877 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.068577 4858 request.go:700] Waited for 1.016475582s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.073236 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.105123 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6dd\" (UniqueName: \"kubernetes.io/projected/065bd27a-40da-4591-82c4-2c1e8717b9d6-kube-api-access-mb6dd\") pod \"oauth-openshift-558db77b4-4zztz\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.120311 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbz5\" (UniqueName: \"kubernetes.io/projected/47e4924d-05ae-4236-b6e8-4af7b98ce486-kube-api-access-sbbz5\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.136235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knvsx\" (UniqueName: \"kubernetes.io/projected/0bd8b721-b4f7-4be5-bcc8-518c65097fa1-kube-api-access-knvsx\") pod \"machine-approver-56656f9798-n6wsw\" (UID: \"0bd8b721-b4f7-4be5-bcc8-518c65097fa1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.147418 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.159137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbfzw\" (UniqueName: \"kubernetes.io/projected/db8cbc4d-eadf-4949-9b00-760f67bd0442-kube-api-access-bbfzw\") pod \"openshift-config-operator-7777fb866f-h4k5m\" (UID: \"db8cbc4d-eadf-4949-9b00-760f67bd0442\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.184182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xh5f\" (UniqueName: \"kubernetes.io/projected/2db6d150-e5c9-41b2-9289-2f6ee74c648b-kube-api-access-4xh5f\") pod \"downloads-7954f5f757-rzsvl\" (UID: \"2db6d150-e5c9-41b2-9289-2f6ee74c648b\") " pod="openshift-console/downloads-7954f5f757-rzsvl" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.196804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzpp2\" (UniqueName: \"kubernetes.io/projected/ee76bb43-a079-4631-aace-ba93a4e04e4a-kube-api-access-dzpp2\") pod \"controller-manager-879f6c89f-wfbnh\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.202928 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.217406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdhv4\" (UniqueName: \"kubernetes.io/projected/20f59d96-5524-4b11-ac3b-b2634f94b6f7-kube-api-access-fdhv4\") pod \"route-controller-manager-6576b87f9c-r2zjn\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.235880 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jpcp\" (UniqueName: \"kubernetes.io/projected/5f47f6b4-2307-4660-b7d6-61a604ee2a81-kube-api-access-5jpcp\") pod \"apiserver-76f77b778f-c7tvn\" (UID: \"5f47f6b4-2307-4660-b7d6-61a604ee2a81\") " pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.237510 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-rzsvl" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.257973 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stm5j\" (UniqueName: \"kubernetes.io/projected/6e6696fd-dfa5-4863-ae4f-bac4c2379404-kube-api-access-stm5j\") pod \"authentication-operator-69f744f599-fgpw2\" (UID: \"6e6696fd-dfa5-4863-ae4f-bac4c2379404\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.278560 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47e4924d-05ae-4236-b6e8-4af7b98ce486-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5r9lh\" (UID: \"47e4924d-05ae-4236-b6e8-4af7b98ce486\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.282914 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.305003 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.322219 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.332053 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.334373 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.342516 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.363309 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.382984 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.403333 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.418169 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.422691 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.427166 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.443199 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.463391 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.477938 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.501863 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9khj\" (UniqueName: \"kubernetes.io/projected/4aded898-143e-40c9-99b8-5dd45d739d64-kube-api-access-s9khj\") pod \"apiserver-7bbb656c7d-m96p9\" (UID: \"4aded898-143e-40c9-99b8-5dd45d739d64\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.502544 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.514701 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.522985 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.542377 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.563633 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.579159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" event={"ID":"0bd8b721-b4f7-4be5-bcc8-518c65097fa1","Type":"ContainerStarted","Data":"d2369462e9863584ae2236a52318ba346c4ecb9e7998758b78491980d004e9a9"} Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.583297 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.606746 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.625462 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.642954 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.663033 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.693790 4858 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.693959 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.702402 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.722253 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.743388 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.762803 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.783153 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.802595 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.822029 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.842574 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.862057 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.883265 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.903009 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.922812 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.942664 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.963101 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 05 13:58:52 crc kubenswrapper[4858]: I1205 13:58:52.982986 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.003242 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.006322 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.007374 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.015843 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fgpw2"] Dec 05 13:58:53 crc kubenswrapper[4858]: W1205 13:58:53.016309 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47e4924d_05ae_4236_b6e8_4af7b98ce486.slice/crio-c21d75ed0581f5f1c5c1d07ba2f6e7c4896b9b3d66c6942b5bdb744a4fff68a7 WatchSource:0}: Error finding container c21d75ed0581f5f1c5c1d07ba2f6e7c4896b9b3d66c6942b5bdb744a4fff68a7: Status 404 returned error can't find the container with id c21d75ed0581f5f1c5c1d07ba2f6e7c4896b9b3d66c6942b5bdb744a4fff68a7 Dec 05 13:58:53 crc kubenswrapper[4858]: W1205 13:58:53.016992 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb8cbc4d_eadf_4949_9b00_760f67bd0442.slice/crio-512ff496ceb3ef52caa0ea38ba7d576d2c2ad19273d9288d353d6c6ba4592033 WatchSource:0}: Error finding container 512ff496ceb3ef52caa0ea38ba7d576d2c2ad19273d9288d353d6c6ba4592033: Status 404 returned error can't find the container with id 512ff496ceb3ef52caa0ea38ba7d576d2c2ad19273d9288d353d6c6ba4592033 Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.018898 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.021086 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-rzsvl"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.024440 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.026417 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfbnh"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.031923 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4zztz"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.033484 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-c7tvn"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.042729 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 13:58:53 crc kubenswrapper[4858]: W1205 13:58:53.049333 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee76bb43_a079_4631_aace_ba93a4e04e4a.slice/crio-d8d183dafc2eddc607bbee74dee04fc054eae9e3a8eb88abd726e00cf3948b04 WatchSource:0}: Error finding container d8d183dafc2eddc607bbee74dee04fc054eae9e3a8eb88abd726e00cf3948b04: Status 404 returned error can't find the container with id d8d183dafc2eddc607bbee74dee04fc054eae9e3a8eb88abd726e00cf3948b04 Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.064797 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 05 13:58:53 crc kubenswrapper[4858]: W1205 13:58:53.077442 4858 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2db6d150_e5c9_41b2_9289_2f6ee74c648b.slice/crio-e0e4297b411d5f880c2e40194f97af7b6bf4fa7c8cb72164b946d082c4565706 WatchSource:0}: Error finding container e0e4297b411d5f880c2e40194f97af7b6bf4fa7c8cb72164b946d082c4565706: Status 404 returned error can't find the container with id e0e4297b411d5f880c2e40194f97af7b6bf4fa7c8cb72164b946d082c4565706 Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.081487 4858 request.go:700] Waited for 1.86542109s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.115809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjc62\" (UniqueName: \"kubernetes.io/projected/8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0-kube-api-access-xjc62\") pod \"openshift-controller-manager-operator-756b6f6bc6-s5cwr\" (UID: \"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.120285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5x77\" (UniqueName: \"kubernetes.io/projected/313be014-d206-4d8a-a459-8f1a34bb4e7a-kube-api-access-l5x77\") pod \"etcd-operator-b45778765-qnpwj\" (UID: \"313be014-d206-4d8a-a459-8f1a34bb4e7a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.122678 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.142460 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.164870 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.174932 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.182535 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.309130 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.310624 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.311048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-metrics-certs\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.311310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57218a3f-09f7-4d6a-a308-b17e118f46ae-metrics-tls\") pod \"dns-operator-744455d44c-q7jsq\" (UID: \"57218a3f-09f7-4d6a-a308-b17e118f46ae\") " pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.311581 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab28fcbb-545b-4e1a-9c37-b3db4335917c-config\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.311736 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97821ca1-2978-4fcf-a6cc-fdf101794a17-metrics-tls\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.311846 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlr68\" (UniqueName: \"kubernetes.io/projected/97821ca1-2978-4fcf-a6cc-fdf101794a17-kube-api-access-nlr68\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.311944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9nlr\" (UniqueName: \"kubernetes.io/projected/2e53905c-348b-4d4b-897d-c2e47d3b8562-kube-api-access-s9nlr\") pod \"cluster-samples-operator-665b6dd947-fv2vm\" (UID: \"2e53905c-348b-4d4b-897d-c2e47d3b8562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.312247 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17d98864-f8cf-4f61-9707-30871521a9f2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.312353 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w224t\" (UniqueName: \"kubernetes.io/projected/86eb64e6-0d80-466b-842d-1d464e1a7fa9-kube-api-access-w224t\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.312473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3418e2ae-f14a-42c7-88b7-b46764bd9032-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fq6mq\" (UID: \"3418e2ae-f14a-42c7-88b7-b46764bd9032\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.312576 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxd52\" (UniqueName: \"kubernetes.io/projected/3418e2ae-f14a-42c7-88b7-b46764bd9032-kube-api-access-dxd52\") pod \"control-plane-machine-set-operator-78cbb6b69f-fq6mq\" (UID: \"3418e2ae-f14a-42c7-88b7-b46764bd9032\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.312668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab28fcbb-545b-4e1a-9c37-b3db4335917c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.312787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97821ca1-2978-4fcf-a6cc-fdf101794a17-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.312905 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xk7z\" (UniqueName: \"kubernetes.io/projected/57218a3f-09f7-4d6a-a308-b17e118f46ae-kube-api-access-5xk7z\") pod \"dns-operator-744455d44c-q7jsq\" (UID: \"57218a3f-09f7-4d6a-a308-b17e118f46ae\") " pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.313006 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cc0327d-c1d0-4177-9670-b53e2e205cbc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.313161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a671e973-ca0c-4692-b7ee-fbd76d2c252f-config\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.313311 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17d98864-f8cf-4f61-9707-30871521a9f2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.313428 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtgxc\" (UniqueName: \"kubernetes.io/projected/9cc0327d-c1d0-4177-9670-b53e2e205cbc-kube-api-access-mtgxc\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.313557 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-trusted-ca\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.313653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97821ca1-2978-4fcf-a6cc-fdf101794a17-trusted-ca\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.313780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-default-certificate\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.313893 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-stats-auth\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.314010 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43c50736-3414-483f-8104-cefb05d4552c-service-ca-bundle\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.314119 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/053fb3f3-4898-45f5-abc7-0a14c273bd5b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.314208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44rh\" (UniqueName: \"kubernetes.io/projected/43c50736-3414-483f-8104-cefb05d4552c-kube-api-access-h44rh\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.314306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053fb3f3-4898-45f5-abc7-0a14c273bd5b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.314471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a671e973-ca0c-4692-b7ee-fbd76d2c252f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.314578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053fb3f3-4898-45f5-abc7-0a14c273bd5b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.314746 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ssh9\" (UniqueName: \"kubernetes.io/projected/8a09c06e-57de-4891-b165-b1b42308b23b-kube-api-access-9ssh9\") pod \"migrator-59844c95c7-vfjgg\" (UID: \"8a09c06e-57de-4891-b165-b1b42308b23b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.314942 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-registry-certificates\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.315038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86eb64e6-0d80-466b-842d-1d464e1a7fa9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.315136 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab28fcbb-545b-4e1a-9c37-b3db4335917c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.315233 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a671e973-ca0c-4692-b7ee-fbd76d2c252f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.315348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.315439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-bound-sa-token\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.315564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4t4\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-kube-api-access-nb4t4\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.315686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86eb64e6-0d80-466b-842d-1d464e1a7fa9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.315781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e53905c-348b-4d4b-897d-c2e47d3b8562-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fv2vm\" (UID: \"2e53905c-348b-4d4b-897d-c2e47d3b8562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm"
Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.318099 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:53.818082419 +0000 UTC m=+142.365680548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.320040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cc0327d-c1d0-4177-9670-b53e2e205cbc-proxy-tls\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.320185 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-registry-tls\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421161 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.421423 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:53.921348647 +0000 UTC m=+142.468946786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61356f17-0b7f-4482-83f2-5a6d542a4e68-trusted-ca\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cedb2565-0837-4473-89e6-84269d6e3766-config-volume\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421684 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421713 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-csi-data-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-config\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421782 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxd52\" (UniqueName: \"kubernetes.io/projected/3418e2ae-f14a-42c7-88b7-b46764bd9032-kube-api-access-dxd52\") pod \"control-plane-machine-set-operator-78cbb6b69f-fq6mq\" (UID: \"3418e2ae-f14a-42c7-88b7-b46764bd9032\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17d98864-f8cf-4f61-9707-30871521a9f2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421855 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3418e2ae-f14a-42c7-88b7-b46764bd9032-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fq6mq\" (UID: \"3418e2ae-f14a-42c7-88b7-b46764bd9032\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421881 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab28fcbb-545b-4e1a-9c37-b3db4335917c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421974 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cc0327d-c1d0-4177-9670-b53e2e205cbc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.421997 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a671e973-ca0c-4692-b7ee-fbd76d2c252f-config\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.422039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqkx\" (UniqueName: \"kubernetes.io/projected/fb636da4-8963-449c-adb8-8ba8d1a66d3b-kube-api-access-6bqkx\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.422066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17d98864-f8cf-4f61-9707-30871521a9f2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.422111 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-config\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.422133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbtp\" (UniqueName: \"kubernetes.io/projected/1329b103-5d7b-492b-96ed-c7b5b10e8edd-kube-api-access-zpbtp\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.422187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97821ca1-2978-4fcf-a6cc-fdf101794a17-trusted-ca\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.422210 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e6d32935-4d3d-43c9-b7c7-8735545d39ba-srv-cert\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.422232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43c50736-3414-483f-8104-cefb05d4552c-service-ca-bundle\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.422288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-registration-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.423145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cc0327d-c1d0-4177-9670-b53e2e205cbc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.423199 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17d98864-f8cf-4f61-9707-30871521a9f2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.423807 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a671e973-ca0c-4692-b7ee-fbd76d2c252f-config\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.424414 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43c50736-3414-483f-8104-cefb05d4552c-service-ca-bundle\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.424710 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97821ca1-2978-4fcf-a6cc-fdf101794a17-trusted-ca\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.425445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcn8b\" (UniqueName: \"kubernetes.io/projected/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-kube-api-access-vcn8b\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.425500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d119a06b-0504-4a14-a82e-c8f877c6d01a-signing-cabundle\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.425548 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-serving-cert\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.425597 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42ae75c8-e3d2-4328-83ef-4d7279d05abd-webhook-cert\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.425621 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrhz\" (UniqueName: \"kubernetes.io/projected/7224c6fe-8b26-4d04-b5be-20515e19eb5b-kube-api-access-dqrhz\") pod \"multus-admission-controller-857f4d67dd-6qbn5\" (UID: \"7224c6fe-8b26-4d04-b5be-20515e19eb5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.425694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a671e973-ca0c-4692-b7ee-fbd76d2c252f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42ae75c8-e3d2-4328-83ef-4d7279d05abd-tmpfs\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426241 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftqrh\" (UniqueName: \"kubernetes.io/projected/50cce18d-88c6-44b7-9a7d-9a9734a2eba2-kube-api-access-ftqrh\") pod \"package-server-manager-789f6589d5-hsprq\" (UID: \"50cce18d-88c6-44b7-9a7d-9a9734a2eba2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426271 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61356f17-0b7f-4482-83f2-5a6d542a4e68-serving-cert\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426331 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhpp\" (UniqueName: \"kubernetes.io/projected/61356f17-0b7f-4482-83f2-5a6d542a4e68-kube-api-access-9xhpp\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426357 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv6qx\" (UniqueName: \"kubernetes.io/projected/234d955e-a1e1-4b72-b1d6-da4a4f74f82d-kube-api-access-jv6qx\") pod \"ingress-canary-9dl2k\" (UID: \"234d955e-a1e1-4b72-b1d6-da4a4f74f82d\") " pod="openshift-ingress-canary/ingress-canary-9dl2k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426417 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e6d32935-4d3d-43c9-b7c7-8735545d39ba-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7224c6fe-8b26-4d04-b5be-20515e19eb5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qbn5\" (UID: \"7224c6fe-8b26-4d04-b5be-20515e19eb5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426812 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-oauth-config\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426871 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bpf\" (UniqueName: \"kubernetes.io/projected/80f5ad75-7da0-493a-9fd3-eb605b50e650-kube-api-access-67bpf\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7jqj\" (UniqueName: \"kubernetes.io/projected/521a1948-1758-4148-be85-f3d91f04aac9-kube-api-access-d7jqj\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.426958 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-registry-certificates\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.427009 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86eb64e6-0d80-466b-842d-1d464e1a7fa9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.427035 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb636da4-8963-449c-adb8-8ba8d1a66d3b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.427054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/234d955e-a1e1-4b72-b1d6-da4a4f74f82d-cert\") pod \"ingress-canary-9dl2k\" (UID: \"234d955e-a1e1-4b72-b1d6-da4a4f74f82d\") " pod="openshift-ingress-canary/ingress-canary-9dl2k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.427104 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-bound-sa-token\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.427128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv2gd\" (UniqueName: \"kubernetes.io/projected/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-kube-api-access-rv2gd\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.427360 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-socket-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428315 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4t4\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-kube-api-access-nb4t4\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428354 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-mountpoint-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428399 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61356f17-0b7f-4482-83f2-5a6d542a4e68-config\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428430 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e53905c-348b-4d4b-897d-c2e47d3b8562-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fv2vm\" (UID: \"2e53905c-348b-4d4b-897d-c2e47d3b8562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428452 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb636da4-8963-449c-adb8-8ba8d1a66d3b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428478 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-registry-tls\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428520 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqft\" (UniqueName: \"kubernetes.io/projected/698a7180-694e-4712-8087-afa8fd7d6d4f-kube-api-access-rcqft\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428546 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-images\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428566 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17d98864-f8cf-4f61-9707-30871521a9f2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428569 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/daefbd61-f897-46b5-9e48-d0f03f81aff0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80f5ad75-7da0-493a-9fd3-eb605b50e650-serving-cert\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf7b8\" (UniqueName: \"kubernetes.io/projected/cedb2565-0837-4473-89e6-84269d6e3766-kube-api-access-wf7b8\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2950ccec-35ea-4679-8cf6-1a67f52264b4-srv-cert\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428677 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab28fcbb-545b-4e1a-9c37-b3db4335917c-config\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9nlr\" (UniqueName: \"kubernetes.io/projected/2e53905c-348b-4d4b-897d-c2e47d3b8562-kube-api-access-s9nlr\") pod \"cluster-samples-operator-665b6dd947-fv2vm\" (UID: \"2e53905c-348b-4d4b-897d-c2e47d3b8562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97821ca1-2978-4fcf-a6cc-fdf101794a17-metrics-tls\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428726 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlr68\" (UniqueName: \"kubernetes.io/projected/97821ca1-2978-4fcf-a6cc-fdf101794a17-kube-api-access-nlr68\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-trusted-ca-bundle\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f5ad75-7da0-493a-9fd3-eb605b50e650-config\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-registry-certificates\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428843 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w224t\" (UniqueName: \"kubernetes.io/projected/86eb64e6-0d80-466b-842d-1d464e1a7fa9-kube-api-access-w224t\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428863 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97821ca1-2978-4fcf-a6cc-fdf101794a17-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428880 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xk7z\" (UniqueName: \"kubernetes.io/projected/57218a3f-09f7-4d6a-a308-b17e118f46ae-kube-api-access-5xk7z\") pod \"dns-operator-744455d44c-q7jsq\" (UID: \"57218a3f-09f7-4d6a-a308-b17e118f46ae\") " pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428897 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv2p8\" (UniqueName: \"kubernetes.io/projected/2950ccec-35ea-4679-8cf6-1a67f52264b4-kube-api-access-cv2p8\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtgxc\" (UniqueName: \"kubernetes.io/projected/9cc0327d-c1d0-4177-9670-b53e2e205cbc-kube-api-access-mtgxc\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428931 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2950ccec-35ea-4679-8cf6-1a67f52264b4-profile-collector-cert\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428946 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-default-certificate\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-trusted-ca\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-stats-auth\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.428999 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/053fb3f3-4898-45f5-abc7-0a14c273bd5b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429015 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95eba5b0-94bb-4594-a49e-ca21538ef39d-metrics-tls\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429033 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44rh\" (UniqueName: \"kubernetes.io/projected/43c50736-3414-483f-8104-cefb05d4552c-kube-api-access-h44rh\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429050 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-service-ca\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429065 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-oauth-serving-cert\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429081 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053fb3f3-4898-45f5-abc7-0a14c273bd5b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429098 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50cce18d-88c6-44b7-9a7d-9a9734a2eba2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hsprq\" (UID: \"50cce18d-88c6-44b7-9a7d-9a9734a2eba2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d119a06b-0504-4a14-a82e-c8f877c6d01a-signing-key\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429133 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053fb3f3-4898-45f5-abc7-0a14c273bd5b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429158 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ssh9\" (UniqueName: \"kubernetes.io/projected/8a09c06e-57de-4891-b165-b1b42308b23b-kube-api-access-9ssh9\") pod \"migrator-59844c95c7-vfjgg\" (UID: \"8a09c06e-57de-4891-b165-b1b42308b23b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42ae75c8-e3d2-4328-83ef-4d7279d05abd-apiservice-cert\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a671e973-ca0c-4692-b7ee-fbd76d2c252f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429214 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab28fcbb-545b-4e1a-9c37-b3db4335917c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429229 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-plugins-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429280 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86c6k\" (UniqueName: \"kubernetes.io/projected/42ae75c8-e3d2-4328-83ef-4d7279d05abd-kube-api-access-86c6k\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429295 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx7r5\" (UniqueName: \"kubernetes.io/projected/d119a06b-0504-4a14-a82e-c8f877c6d01a-kube-api-access-bx7r5\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429314 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86eb64e6-0d80-466b-842d-1d464e1a7fa9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmjs5\" (UniqueName: \"kubernetes.io/projected/daefbd61-f897-46b5-9e48-d0f03f81aff0-kube-api-access-vmjs5\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cc0327d-c1d0-4177-9670-b53e2e205cbc-proxy-tls\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbcdh\" (UniqueName: \"kubernetes.io/projected/e6d32935-4d3d-43c9-b7c7-8735545d39ba-kube-api-access-sbcdh\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429375 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/daefbd61-f897-46b5-9e48-d0f03f81aff0-proxy-tls\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429393 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqs2r\" (UniqueName: \"kubernetes.io/projected/95eba5b0-94bb-4594-a49e-ca21538ef39d-kube-api-access-sqs2r\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/698a7180-694e-4712-8087-afa8fd7d6d4f-node-bootstrap-token\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429423 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/698a7180-694e-4712-8087-afa8fd7d6d4f-certs\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-metrics-certs\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429454 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57218a3f-09f7-4d6a-a308-b17e118f46ae-metrics-tls\") pod \"dns-operator-744455d44c-q7jsq\" (UID: \"57218a3f-09f7-4d6a-a308-b17e118f46ae\") " pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429469 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429485 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95eba5b0-94bb-4594-a49e-ca21538ef39d-config-volume\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429506 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/daefbd61-f897-46b5-9e48-d0f03f81aff0-images\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.429521 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cedb2565-0837-4473-89e6-84269d6e3766-secret-volume\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"
Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.430127 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:53.930098847 +0000 UTC m=+142.477696986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.433096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-stats-auth\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.433404 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab28fcbb-545b-4e1a-9c37-b3db4335917c-config\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.435243 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2e53905c-348b-4d4b-897d-c2e47d3b8562-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fv2vm\" (UID: \"2e53905c-348b-4d4b-897d-c2e47d3b8562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.437221 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab28fcbb-545b-4e1a-9c37-b3db4335917c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.437644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3418e2ae-f14a-42c7-88b7-b46764bd9032-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fq6mq\" (UID: \"3418e2ae-f14a-42c7-88b7-b46764bd9032\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.438324 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-registry-tls\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.442556 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57218a3f-09f7-4d6a-a308-b17e118f46ae-metrics-tls\") pod \"dns-operator-744455d44c-q7jsq\" (UID: \"57218a3f-09f7-4d6a-a308-b17e118f46ae\") " pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.446064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9cc0327d-c1d0-4177-9670-b53e2e205cbc-proxy-tls\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.446456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-trusted-ca\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.447360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-metrics-certs\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.448020 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86eb64e6-0d80-466b-842d-1d464e1a7fa9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.448498 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053fb3f3-4898-45f5-abc7-0a14c273bd5b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.449508 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a671e973-ca0c-4692-b7ee-fbd76d2c252f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.449677 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86eb64e6-0d80-466b-842d-1d464e1a7fa9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.450488 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/97821ca1-2978-4fcf-a6cc-fdf101794a17-metrics-tls\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.461664 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/43c50736-3414-483f-8104-cefb05d4552c-default-certificate\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.462411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053fb3f3-4898-45f5-abc7-0a14c273bd5b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.470756 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxd52\" (UniqueName: \"kubernetes.io/projected/3418e2ae-f14a-42c7-88b7-b46764bd9032-kube-api-access-dxd52\") pod \"control-plane-machine-set-operator-78cbb6b69f-fq6mq\" (UID: \"3418e2ae-f14a-42c7-88b7-b46764bd9032\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.483129 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab28fcbb-545b-4e1a-9c37-b3db4335917c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xgwpc\" (UID: \"ab28fcbb-545b-4e1a-9c37-b3db4335917c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.522303 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-bound-sa-token\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.525008 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4t4\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-kube-api-access-nb4t4\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531082 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.531238 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.031211875 +0000 UTC m=+142.578810014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95eba5b0-94bb-4594-a49e-ca21538ef39d-metrics-tls\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-service-ca\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531340 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-oauth-serving-cert\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50cce18d-88c6-44b7-9a7d-9a9734a2eba2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hsprq\" (UID: \"50cce18d-88c6-44b7-9a7d-9a9734a2eba2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531376 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d119a06b-0504-4a14-a82e-c8f877c6d01a-signing-key\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42ae75c8-e3d2-4328-83ef-4d7279d05abd-apiservice-cert\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531430 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-plugins-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531451 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86c6k\" (UniqueName: \"kubernetes.io/projected/42ae75c8-e3d2-4328-83ef-4d7279d05abd-kube-api-access-86c6k\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx7r5\" (UniqueName: \"kubernetes.io/projected/d119a06b-0504-4a14-a82e-c8f877c6d01a-kube-api-access-bx7r5\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531515 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmjs5\" (UniqueName: \"kubernetes.io/projected/daefbd61-f897-46b5-9e48-d0f03f81aff0-kube-api-access-vmjs5\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531535 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbcdh\" (UniqueName: \"kubernetes.io/projected/e6d32935-4d3d-43c9-b7c7-8735545d39ba-kube-api-access-sbcdh\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531550 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqs2r\" (UniqueName: \"kubernetes.io/projected/95eba5b0-94bb-4594-a49e-ca21538ef39d-kube-api-access-sqs2r\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/daefbd61-f897-46b5-9e48-d0f03f81aff0-proxy-tls\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531629 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/698a7180-694e-4712-8087-afa8fd7d6d4f-node-bootstrap-token\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/698a7180-694e-4712-8087-afa8fd7d6d4f-certs\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531675 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cedb2565-0837-4473-89e6-84269d6e3766-secret-volume\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531695 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531710 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95eba5b0-94bb-4594-a49e-ca21538ef39d-config-volume\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531725 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/daefbd61-f897-46b5-9e48-d0f03f81aff0-images\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61356f17-0b7f-4482-83f2-5a6d542a4e68-trusted-ca\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531760 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cedb2565-0837-4473-89e6-84269d6e3766-config-volume\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.531795 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-csi-data-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.533603 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/daefbd61-f897-46b5-9e48-d0f03f81aff0-images\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.533951 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-oauth-serving-cert\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.534630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cedb2565-0837-4473-89e6-84269d6e3766-config-volume\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.536174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61356f17-0b7f-4482-83f2-5a6d542a4e68-trusted-ca\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.537894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.538230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-plugins-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.538499 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.038485876 +0000 UTC m=+142.586084015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.538796 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-service-ca\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-config\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqkx\" (UniqueName: \"kubernetes.io/projected/fb636da4-8963-449c-adb8-8ba8d1a66d3b-kube-api-access-6bqkx\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539211 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpbtp\" (UniqueName: \"kubernetes.io/projected/1329b103-5d7b-492b-96ed-c7b5b10e8edd-kube-api-access-zpbtp\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-config\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539258 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e6d32935-4d3d-43c9-b7c7-8735545d39ba-srv-cert\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-registration-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcn8b\" (UniqueName: \"kubernetes.io/projected/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-kube-api-access-vcn8b\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d119a06b-0504-4a14-a82e-c8f877c6d01a-signing-cabundle\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539333 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-serving-cert\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42ae75c8-e3d2-4328-83ef-4d7279d05abd-webhook-cert\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrhz\" (UniqueName: \"kubernetes.io/projected/7224c6fe-8b26-4d04-b5be-20515e19eb5b-kube-api-access-dqrhz\") pod \"multus-admission-controller-857f4d67dd-6qbn5\" (UID: \"7224c6fe-8b26-4d04-b5be-20515e19eb5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42ae75c8-e3d2-4328-83ef-4d7279d05abd-tmpfs\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftqrh\" (UniqueName: \"kubernetes.io/projected/50cce18d-88c6-44b7-9a7d-9a9734a2eba2-kube-api-access-ftqrh\") pod \"package-server-manager-789f6589d5-hsprq\" (UID: \"50cce18d-88c6-44b7-9a7d-9a9734a2eba2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539422 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61356f17-0b7f-4482-83f2-5a6d542a4e68-serving-cert\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539443 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhpp\" (UniqueName: \"kubernetes.io/projected/61356f17-0b7f-4482-83f2-5a6d542a4e68-kube-api-access-9xhpp\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv6qx\" (UniqueName: \"kubernetes.io/projected/234d955e-a1e1-4b72-b1d6-da4a4f74f82d-kube-api-access-jv6qx\") pod \"ingress-canary-9dl2k\" (UID: \"234d955e-a1e1-4b72-b1d6-da4a4f74f82d\") " pod="openshift-ingress-canary/ingress-canary-9dl2k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539481 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e6d32935-4d3d-43c9-b7c7-8735545d39ba-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7224c6fe-8b26-4d04-b5be-20515e19eb5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qbn5\" (UID: \"7224c6fe-8b26-4d04-b5be-20515e19eb5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-oauth-config\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bpf\" (UniqueName: \"kubernetes.io/projected/80f5ad75-7da0-493a-9fd3-eb605b50e650-kube-api-access-67bpf\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539543 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jqj\" (UniqueName: \"kubernetes.io/projected/521a1948-1758-4148-be85-f3d91f04aac9-kube-api-access-d7jqj\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb636da4-8963-449c-adb8-8ba8d1a66d3b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539590 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/234d955e-a1e1-4b72-b1d6-da4a4f74f82d-cert\") pod \"ingress-canary-9dl2k\" (UID: \"234d955e-a1e1-4b72-b1d6-da4a4f74f82d\") " pod="openshift-ingress-canary/ingress-canary-9dl2k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv2gd\" (UniqueName: \"kubernetes.io/projected/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-kube-api-access-rv2gd\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-socket-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539653 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-mountpoint-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539671 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61356f17-0b7f-4482-83f2-5a6d542a4e68-config\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539701 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb636da4-8963-449c-adb8-8ba8d1a66d3b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539723 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqft\" (UniqueName: \"kubernetes.io/projected/698a7180-694e-4712-8087-afa8fd7d6d4f-kube-api-access-rcqft\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539741 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-images\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539757 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/daefbd61-f897-46b5-9e48-d0f03f81aff0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539772 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80f5ad75-7da0-493a-9fd3-eb605b50e650-serving-cert\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539790 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf7b8\" (UniqueName: \"kubernetes.io/projected/cedb2565-0837-4473-89e6-84269d6e3766-kube-api-access-wf7b8\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539809 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2950ccec-35ea-4679-8cf6-1a67f52264b4-srv-cert\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-trusted-ca-bundle\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f5ad75-7da0-493a-9fd3-eb605b50e650-config\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539929 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv2p8\" (UniqueName: \"kubernetes.io/projected/2950ccec-35ea-4679-8cf6-1a67f52264b4-kube-api-access-cv2p8\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.539950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2950ccec-35ea-4679-8cf6-1a67f52264b4-profile-collector-cert\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.540486 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95eba5b0-94bb-4594-a49e-ca21538ef39d-config-volume\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.544527 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/50cce18d-88c6-44b7-9a7d-9a9734a2eba2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hsprq\" (UID: \"50cce18d-88c6-44b7-9a7d-9a9734a2eba2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.545057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb636da4-8963-449c-adb8-8ba8d1a66d3b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.549594 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d119a06b-0504-4a14-a82e-c8f877c6d01a-signing-key\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.551330 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-images\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.552198 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-socket-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.552251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-mountpoint-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.555847 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61356f17-0b7f-4482-83f2-5a6d542a4e68-config\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.556033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-trusted-ca-bundle\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.556101 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d119a06b-0504-4a14-a82e-c8f877c6d01a-signing-cabundle\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.557134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80f5ad75-7da0-493a-9fd3-eb605b50e650-serving-cert\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.557728 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-csi-data-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.557910 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/521a1948-1758-4148-be85-f3d91f04aac9-registration-dir\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.558690 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-config\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.558852 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/daefbd61-f897-46b5-9e48-d0f03f81aff0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.559092 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-oauth-config\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.559657 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/42ae75c8-e3d2-4328-83ef-4d7279d05abd-tmpfs\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.561686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/698a7180-694e-4712-8087-afa8fd7d6d4f-node-bootstrap-token\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.563607 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/42ae75c8-e3d2-4328-83ef-4d7279d05abd-apiservice-cert\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.564506 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/053fb3f3-4898-45f5-abc7-0a14c273bd5b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rpkw2\" (UID: \"053fb3f3-4898-45f5-abc7-0a14c273bd5b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.568287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f5ad75-7da0-493a-9fd3-eb605b50e650-config\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.568679 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-config\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.568857 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/daefbd61-f897-46b5-9e48-d0f03f81aff0-proxy-tls\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.569198 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44rh\" (UniqueName: \"kubernetes.io/projected/43c50736-3414-483f-8104-cefb05d4552c-kube-api-access-h44rh\") pod \"router-default-5444994796-kmzj6\" (UID: \"43c50736-3414-483f-8104-cefb05d4552c\") " pod="openshift-ingress/router-default-5444994796-kmzj6"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.569577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cedb2565-0837-4473-89e6-84269d6e3766-secret-volume\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.570349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e6d32935-4d3d-43c9-b7c7-8735545d39ba-srv-cert\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.570722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb636da4-8963-449c-adb8-8ba8d1a66d3b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.571352 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2950ccec-35ea-4679-8cf6-1a67f52264b4-srv-cert\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.571359 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/698a7180-694e-4712-8087-afa8fd7d6d4f-certs\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.571840 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.572320 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e6d32935-4d3d-43c9-b7c7-8735545d39ba-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.572761 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95eba5b0-94bb-4594-a49e-ca21538ef39d-metrics-tls\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.573135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7224c6fe-8b26-4d04-b5be-20515e19eb5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6qbn5\" (UID: \"7224c6fe-8b26-4d04-b5be-20515e19eb5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.573767 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/42ae75c8-e3d2-4328-83ef-4d7279d05abd-webhook-cert\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.574295 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/234d955e-a1e1-4b72-b1d6-da4a4f74f82d-cert\") pod \"ingress-canary-9dl2k\" (UID: \"234d955e-a1e1-4b72-b1d6-da4a4f74f82d\") " pod="openshift-ingress-canary/ingress-canary-9dl2k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.575351 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-serving-cert\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.577389 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61356f17-0b7f-4482-83f2-5a6d542a4e68-serving-cert\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.577608 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.578018 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2950ccec-35ea-4679-8cf6-1a67f52264b4-profile-collector-cert\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.592625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" event={"ID":"6e6696fd-dfa5-4863-ae4f-bac4c2379404","Type":"ContainerStarted","Data":"58d571ce2360f09c4c97f506e1ae78a990c75e358757459f4c39ce12d6d16573"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.592664 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" event={"ID":"6e6696fd-dfa5-4863-ae4f-bac4c2379404","Type":"ContainerStarted","Data":"c3d54798478a03e83777ba958fc46eed2b4b217dd738dda02289fc23cf8bd68d"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.597682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rzsvl" event={"ID":"2db6d150-e5c9-41b2-9289-2f6ee74c648b","Type":"ContainerStarted","Data":"e0e4297b411d5f880c2e40194f97af7b6bf4fa7c8cb72164b946d082c4565706"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.602805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a671e973-ca0c-4692-b7ee-fbd76d2c252f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lxq22\" (UID: \"a671e973-ca0c-4692-b7ee-fbd76d2c252f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.609391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" event={"ID":"0bd8b721-b4f7-4be5-bcc8-518c65097fa1","Type":"ContainerStarted","Data":"44b157c5a495f154d2d69c18146510f05259ba46e10584868fd5eeb47251adaf"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.629339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" event={"ID":"db8cbc4d-eadf-4949-9b00-760f67bd0442","Type":"ContainerStarted","Data":"a4e33b4e6a60d0f45ac2a55bba376075c1b0a1bb5b2cc4f6067a09bf082482e5"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.629393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" event={"ID":"db8cbc4d-eadf-4949-9b00-760f67bd0442","Type":"ContainerStarted","Data":"512ff496ceb3ef52caa0ea38ba7d576d2c2ad19273d9288d353d6c6ba4592033"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.632131 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlr68\" (UniqueName: \"kubernetes.io/projected/97821ca1-2978-4fcf-a6cc-fdf101794a17-kube-api-access-nlr68\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.636234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" event={"ID":"47e4924d-05ae-4236-b6e8-4af7b98ce486","Type":"ContainerStarted","Data":"4c4d309339c8f3ae105bd0f89ca0382213a4a7b3fbcfff5699b121a58272ba60"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.636269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" event={"ID":"47e4924d-05ae-4236-b6e8-4af7b98ce486","Type":"ContainerStarted","Data":"c21d75ed0581f5f1c5c1d07ba2f6e7c4896b9b3d66c6942b5bdb744a4fff68a7"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.638365 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" event={"ID":"4aded898-143e-40c9-99b8-5dd45d739d64","Type":"ContainerStarted","Data":"a61365b79b79d2e5022812a835c1650e8679637dc54bb6ab7d682629e028f570"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.638966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w224t\" (UniqueName: \"kubernetes.io/projected/86eb64e6-0d80-466b-842d-1d464e1a7fa9-kube-api-access-w224t\") pod \"openshift-apiserver-operator-796bbdcf4f-lw86k\" (UID: \"86eb64e6-0d80-466b-842d-1d464e1a7fa9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.640888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.641369 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.141355882 +0000 UTC m=+142.688954021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.641961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" event={"ID":"20f59d96-5524-4b11-ac3b-b2634f94b6f7","Type":"ContainerStarted","Data":"4773bc3f859946bfbac6df391c21116ff32b4b19a5f674f13371c0fd7523ba7e"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.641989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" event={"ID":"20f59d96-5524-4b11-ac3b-b2634f94b6f7","Type":"ContainerStarted","Data":"4625a5c9edbda55d4e196514fa721f238dc55bb648816c2b18164cf59969f374"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.642651 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.643663 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" event={"ID":"5f47f6b4-2307-4660-b7d6-61a604ee2a81","Type":"ContainerStarted","Data":"7f7b092925c63d6b7c156f36ca1cbb7024efec938f12dda693208373868c87bf"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.644336 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r2zjn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.644496 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" podUID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.646132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" event={"ID":"ee76bb43-a079-4631-aace-ba93a4e04e4a","Type":"ContainerStarted","Data":"8becbb2396401ed0934e50dc005e80887958a9d2ea3aa1da13e5ae8d6958016d"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.646161 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" event={"ID":"ee76bb43-a079-4631-aace-ba93a4e04e4a","Type":"ContainerStarted","Data":"d8d183dafc2eddc607bbee74dee04fc054eae9e3a8eb88abd726e00cf3948b04"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.648226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" event={"ID":"065bd27a-40da-4591-82c4-2c1e8717b9d6","Type":"ContainerStarted","Data":"3f4f489c878a690e0dd5072d4f0de0057c429b0a43585d96798e2a1f2a893bf2"}
Dec 05 13:58:53 crc kubenswrapper[4858]: I1205
13:58:53.656876 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.659187 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97821ca1-2978-4fcf-a6cc-fdf101794a17-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wjlfz\" (UID: \"97821ca1-2978-4fcf-a6cc-fdf101794a17\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.662949 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kmzj6" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.676134 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.691274 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.691321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xk7z\" (UniqueName: \"kubernetes.io/projected/57218a3f-09f7-4d6a-a308-b17e118f46ae-kube-api-access-5xk7z\") pod \"dns-operator-744455d44c-q7jsq\" (UID: \"57218a3f-09f7-4d6a-a308-b17e118f46ae\") " pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.701454 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtgxc\" (UniqueName: \"kubernetes.io/projected/9cc0327d-c1d0-4177-9670-b53e2e205cbc-kube-api-access-mtgxc\") pod \"machine-config-controller-84d6567774-nfm2r\" (UID: \"9cc0327d-c1d0-4177-9670-b53e2e205cbc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.703117 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.718671 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9nlr\" (UniqueName: \"kubernetes.io/projected/2e53905c-348b-4d4b-897d-c2e47d3b8562-kube-api-access-s9nlr\") pod \"cluster-samples-operator-665b6dd947-fv2vm\" (UID: \"2e53905c-348b-4d4b-897d-c2e47d3b8562\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.742867 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.745941 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.245924006 +0000 UTC m=+142.793522145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.757839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx7r5\" (UniqueName: \"kubernetes.io/projected/d119a06b-0504-4a14-a82e-c8f877c6d01a-kube-api-access-bx7r5\") pod \"service-ca-9c57cc56f-t8x88\" (UID: \"d119a06b-0504-4a14-a82e-c8f877c6d01a\") " pod="openshift-service-ca/service-ca-9c57cc56f-t8x88" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.780367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86c6k\" (UniqueName: \"kubernetes.io/projected/42ae75c8-e3d2-4328-83ef-4d7279d05abd-kube-api-access-86c6k\") pod \"packageserver-d55dfcdfc-l2x7g\" (UID: \"42ae75c8-e3d2-4328-83ef-4d7279d05abd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.784385 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-t8x88" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.804062 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf7b8\" (UniqueName: \"kubernetes.io/projected/cedb2565-0837-4473-89e6-84269d6e3766-kube-api-access-wf7b8\") pod \"collect-profiles-29415705-5fszb\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.817980 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqft\" (UniqueName: \"kubernetes.io/projected/698a7180-694e-4712-8087-afa8fd7d6d4f-kube-api-access-rcqft\") pod \"machine-config-server-zrzh2\" (UID: \"698a7180-694e-4712-8087-afa8fd7d6d4f\") " pod="openshift-machine-config-operator/machine-config-server-zrzh2" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.839973 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmjs5\" (UniqueName: \"kubernetes.io/projected/daefbd61-f897-46b5-9e48-d0f03f81aff0-kube-api-access-vmjs5\") pod \"machine-config-operator-74547568cd-8lrbs\" (UID: \"daefbd61-f897-46b5-9e48-d0f03f81aff0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.844107 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.844203 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
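The pair of errors above repeats for the rest of this capture: roughly every 500 ms the reconciler retries UnmountVolume.TearDown for the terminated pod 8f668bae-612b-4b75-9490-919e737c6a3b and MountVolume.MountDevice for image-registry-697d97f7c8-trcq9, and both fail with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers". That driver is served by the csi-hostpathplugin-l27jv pod, which this same capture shows only now coming up (its kube-api-access volume mounts at 13:58:53.939 and its sandbox is only being started at 13:58:54.120), so the failures are expected to clear once the plugin registers with the kubelet. Below is a minimal Go sketch, not part of kubelet, that tallies these retries per volume from a saved journal; the file name journal.log is an assumption, and the regex is derived from the exact line shape visible above.

```go
// csi_retry_scan.go -- a minimal sketch (assumed standalone tool, not kubelet
// code): scan a saved journal export for the "not found in the list of
// registered CSI drivers" retry errors seen above and count them per volume.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Assumed export, e.g.: journalctl -u kubelet > journal.log
	f, err := os.Open("journal.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Matches e.g.: Error: MountVolume.MountDevice failed for volume "pvc-..."
	re := regexp.MustCompile(`Error: (UnmountVolume\.TearDown|MountVolume\.MountDevice) failed for volume "([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		// A single journal line may carry several entries, so find all matches.
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%s retries=%d\n", k, n)
	}
}
```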
No retries permitted until 2025-12-05 13:58:54.344188156 +0000 UTC m=+142.891786295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.844411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.844682 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.344674879 +0000 UTC m=+142.892273018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.848715 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.864078 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbcdh\" (UniqueName: \"kubernetes.io/projected/e6d32935-4d3d-43c9-b7c7-8735545d39ba-kube-api-access-sbcdh\") pod \"olm-operator-6b444d44fb-6klpw\" (UID: \"e6d32935-4d3d-43c9-b7c7-8735545d39ba\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.867608 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qnpwj"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.881871 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.883531 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqs2r\" (UniqueName: \"kubernetes.io/projected/95eba5b0-94bb-4594-a49e-ca21538ef39d-kube-api-access-sqs2r\") pod \"dns-default-5c95q\" (UID: \"95eba5b0-94bb-4594-a49e-ca21538ef39d\") " pod="openshift-dns/dns-default-5c95q" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.889157 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.895517 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv2gd\" (UniqueName: \"kubernetes.io/projected/d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c-kube-api-access-rv2gd\") pod \"machine-api-operator-5694c8668f-26jzf\" (UID: \"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.914704 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.921974 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.931774 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.939112 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7jqj\" (UniqueName: \"kubernetes.io/projected/521a1948-1758-4148-be85-f3d91f04aac9-kube-api-access-d7jqj\") pod \"csi-hostpathplugin-l27jv\" (UID: \"521a1948-1758-4148-be85-f3d91f04aac9\") " pod="hostpath-provisioner/csi-hostpathplugin-l27jv" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.945802 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:53 crc kubenswrapper[4858]: E1205 13:58:53.946190 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.446176499 +0000 UTC m=+142.993774638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.957037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv2p8\" (UniqueName: \"kubernetes.io/projected/2950ccec-35ea-4679-8cf6-1a67f52264b4-kube-api-access-cv2p8\") pod \"catalog-operator-68c6474976-fhlhr\" (UID: \"2950ccec-35ea-4679-8cf6-1a67f52264b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.970228 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.974108 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz"] Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.978401 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftqrh\" (UniqueName: \"kubernetes.io/projected/50cce18d-88c6-44b7-9a7d-9a9734a2eba2-kube-api-access-ftqrh\") pod \"package-server-manager-789f6589d5-hsprq\" (UID: \"50cce18d-88c6-44b7-9a7d-9a9734a2eba2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" Dec 05 13:58:53 crc kubenswrapper[4858]: I1205 13:58:53.998720 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpbtp\" (UniqueName: \"kubernetes.io/projected/1329b103-5d7b-492b-96ed-c7b5b10e8edd-kube-api-access-zpbtp\") pod \"console-f9d7485db-x25gp\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " pod="openshift-console/console-f9d7485db-x25gp" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.002952 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq"] Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.010160 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.016339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrhz\" (UniqueName: \"kubernetes.io/projected/7224c6fe-8b26-4d04-b5be-20515e19eb5b-kube-api-access-dqrhz\") pod \"multus-admission-controller-857f4d67dd-6qbn5\" (UID: \"7224c6fe-8b26-4d04-b5be-20515e19eb5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.018319 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.024463 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.033008 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.035875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqkx\" (UniqueName: \"kubernetes.io/projected/fb636da4-8963-449c-adb8-8ba8d1a66d3b-kube-api-access-6bqkx\") pod \"kube-storage-version-migrator-operator-b67b599dd-r2j8b\" (UID: \"fb636da4-8963-449c-adb8-8ba8d1a66d3b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.041864 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.048474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.048758 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.548746857 +0000 UTC m=+143.096344996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.055048 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.056353 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xhpp\" (UniqueName: \"kubernetes.io/projected/61356f17-0b7f-4482-83f2-5a6d542a4e68-kube-api-access-9xhpp\") pod \"console-operator-58897d9998-xxk7s\" (UID: \"61356f17-0b7f-4482-83f2-5a6d542a4e68\") " pod="openshift-console-operator/console-operator-58897d9998-xxk7s" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.080116 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.083414 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcn8b\" (UniqueName: \"kubernetes.io/projected/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-kube-api-access-vcn8b\") pod \"marketplace-operator-79b997595-9qgzs\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.091128 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zrzh2" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.096332 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv6qx\" (UniqueName: \"kubernetes.io/projected/234d955e-a1e1-4b72-b1d6-da4a4f74f82d-kube-api-access-jv6qx\") pod \"ingress-canary-9dl2k\" (UID: \"234d955e-a1e1-4b72-b1d6-da4a4f74f82d\") " pod="openshift-ingress-canary/ingress-canary-9dl2k" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.096948 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5c95q" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.120163 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.126085 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9dl2k" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.149046 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.149210 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.649185437 +0000 UTC m=+143.196783576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.149370 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.149685 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.64967233 +0000 UTC m=+143.197270469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.171814 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.208421 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22"] Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.213249 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.226690 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67bpf\" (UniqueName: \"kubernetes.io/projected/80f5ad75-7da0-493a-9fd3-eb605b50e650-kube-api-access-67bpf\") pod \"service-ca-operator-777779d784-g5f8h\" (UID: \"80f5ad75-7da0-493a-9fd3-eb605b50e650\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.226711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ssh9\" (UniqueName: \"kubernetes.io/projected/8a09c06e-57de-4891-b165-b1b42308b23b-kube-api-access-9ssh9\") pod \"migrator-59844c95c7-vfjgg\" (UID: \"8a09c06e-57de-4891-b165-b1b42308b23b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.238800 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x25gp" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.249637 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.250204 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.750190023 +0000 UTC m=+143.297788162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.282220 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.297590 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.349044 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.356107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.356519 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.856484533 +0000 UTC m=+143.404082672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.363657 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.456623 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.457354 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:54.957335445 +0000 UTC m=+143.504933584 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.457477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.457735 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-05 13:58:54.957727025 +0000 UTC m=+143.505325164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.533161 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6qbn5"] Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.560372 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.560682 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.060664004 +0000 UTC m=+143.608262143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.660937 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2"] Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.661071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz" event={"ID":"97821ca1-2978-4fcf-a6cc-fdf101794a17","Type":"ContainerStarted","Data":"2fb8ec2bf70bef30a601fd8ff321828c2c8a2ecd56d5402a97edb293efec0551"} Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.661516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.661838 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.161812494 +0000 UTC m=+143.709410633 (durationBeforeRetry 500ms). 
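The "No retries permitted until ... (durationBeforeRetry 500ms)" lines come from the pending-operations bookkeeping that nestedpendingoperations.go:348 is logging: operations are deduplicated on a key of the form {volumeName:... podName:... nodeName:}, at most one runs at a time, and a failure arms a not-before timestamp that rejects further attempts until the backoff expires. The following is a minimal Go sketch of that gating pattern under a fixed 500 ms backoff mirroring the durationBeforeRetry shown here; it illustrates the idea only and is not the actual kubelet implementation, which differs in detail.

```go
// backoff_gate.go -- a minimal sketch of per-key operation gating with a
// failure backoff, in the spirit of kubelet's nestedpendingoperations.
package main

import (
	"fmt"
	"sync"
	"time"
)

type gate struct {
	mu        sync.Mutex
	notBefore map[string]time.Time // per-key "no retries permitted until"
}

func (g *gate) run(key string, op func() error, backoff time.Duration) error {
	g.mu.Lock()
	if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
		g.mu.Unlock()
		return fmt.Errorf("operation for %q failed. No retries permitted until %s", key, t.Format(time.RFC3339Nano))
	}
	g.mu.Unlock()

	if err := op(); err != nil {
		g.mu.Lock()
		g.notBefore[key] = time.Now().Add(backoff) // durationBeforeRetry
		g.mu.Unlock()
		return err
	}
	return nil
}

func main() {
	g := &gate{notBefore: map[string]time.Time{}}
	key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
	fail := func() error { return fmt.Errorf("driver not yet registered") }

	fmt.Println(g.run(key, fail, 500*time.Millisecond)) // fails and arms the gate
	fmt.Println(g.run(key, fail, 500*time.Millisecond)) // rejected: gate still armed
	time.Sleep(600 * time.Millisecond)
	fmt.Println(g.run(key, fail, 500*time.Millisecond)) // retried after the backoff expires
}
```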
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.662181 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc" event={"ID":"ab28fcbb-545b-4e1a-9c37-b3db4335917c","Type":"ContainerStarted","Data":"9c10f0ea8e18c1d0bbda0f07ce63feab761bf9f20ae704d78576aa58820b1426"} Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.663143 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22" event={"ID":"a671e973-ca0c-4692-b7ee-fbd76d2c252f","Type":"ContainerStarted","Data":"c991fce6744801926ed52a7ccd45129426c33da5310f3bf3179a7ea71abe5ae2"} Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.664020 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kmzj6" event={"ID":"43c50736-3414-483f-8104-cefb05d4552c","Type":"ContainerStarted","Data":"86ac7376569900c82c5cf3eb2bf49ee3b763309822cd3c44def8d77d417e85c2"} Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.665325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq" event={"ID":"3418e2ae-f14a-42c7-88b7-b46764bd9032","Type":"ContainerStarted","Data":"27e0bfd52d01e22cfdc44ed86efa04649652733ce94bf5779926860f3829e492"} Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.666278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" event={"ID":"313be014-d206-4d8a-a459-8f1a34bb4e7a","Type":"ContainerStarted","Data":"47c8c9a2ebefd1aaddc24b742d557108b15c6f7c6006e11b0389419344a45921"} Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.668123 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" event={"ID":"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0","Type":"ContainerStarted","Data":"8a50b43fca9c94cf5c7099ad6b5f8fbd3bbe22312da189f3a0c53c5a7a82d430"} Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.669339 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r2zjn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.669398 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" podUID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.692160 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr"] Dec 05 13:58:54 
crc kubenswrapper[4858]: I1205 13:58:54.762135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.762316 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.262290395 +0000 UTC m=+143.809888544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.762509 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.763294 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.263280531 +0000 UTC m=+143.810878670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.863201 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.863389 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.363364542 +0000 UTC m=+143.910962681 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.863492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.863897 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.363885676 +0000 UTC m=+143.911483825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.964557 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.964879 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.46485285 +0000 UTC m=+144.012451009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:54 crc kubenswrapper[4858]: I1205 13:58:54.965086 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:54 crc kubenswrapper[4858]: E1205 13:58:54.965454 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.465446647 +0000 UTC m=+144.013044786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.066288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.066874 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.566853923 +0000 UTC m=+144.114452062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: W1205 13:58:55.072198 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2950ccec_35ea_4679_8cf6_1a67f52264b4.slice/crio-40a4968ca8428cba15005074f8570f3103efbb20cb5f7203952acd6b7488044c WatchSource:0}: Error finding container 40a4968ca8428cba15005074f8570f3103efbb20cb5f7203952acd6b7488044c: Status 404 returned error can't find the container with id 40a4968ca8428cba15005074f8570f3103efbb20cb5f7203952acd6b7488044c Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.170258 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.170692 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.670678606 +0000 UTC m=+144.218276745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.193907 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" podStartSLOduration=122.193885314 podStartE2EDuration="2m2.193885314s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:55.161617158 +0000 UTC m=+143.709215307" watchObservedRunningTime="2025-12-05 13:58:55.193885314 +0000 UTC m=+143.741483453" Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.272682 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.273033 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 13:58:55.773018188 +0000 UTC m=+144.320616327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.373811 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.374207 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.874189069 +0000 UTC m=+144.421787208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.476556 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.477066 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:55.977046335 +0000 UTC m=+144.524644474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.577743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.578168 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.078150953 +0000 UTC m=+144.625749092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.638886 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"] Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.686918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.687467 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.187448487 +0000 UTC m=+144.735046626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.728988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rzsvl" event={"ID":"2db6d150-e5c9-41b2-9289-2f6ee74c648b","Type":"ContainerStarted","Data":"fb9c94c0c7484fe505f50c792f8ec6fe59892a80c6a7ead93e4de58c736eb285"} Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.733483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" event={"ID":"2950ccec-35ea-4679-8cf6-1a67f52264b4","Type":"ContainerStarted","Data":"40a4968ca8428cba15005074f8570f3103efbb20cb5f7203952acd6b7488044c"} Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.740144 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5r9lh" podStartSLOduration=123.740128935 podStartE2EDuration="2m3.740128935s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:55.71229313 +0000 UTC m=+144.259891269" watchObservedRunningTime="2025-12-05 13:58:55.740128935 +0000 UTC m=+144.287727074" Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.756923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2" event={"ID":"053fb3f3-4898-45f5-abc7-0a14c273bd5b","Type":"ContainerStarted","Data":"4cf92090aaf4d9cb11c39ab5ae1852513621d4a20ef8daea67416edb4f16895d"} Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.789022 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.789353 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.289342007 +0000 UTC m=+144.836940146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.800154 4858 generic.go:334] "Generic (PLEG): container finished" podID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerID="a4e33b4e6a60d0f45ac2a55bba376075c1b0a1bb5b2cc4f6067a09bf082482e5" exitCode=0 Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.800235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" event={"ID":"db8cbc4d-eadf-4949-9b00-760f67bd0442","Type":"ContainerDied","Data":"a4e33b4e6a60d0f45ac2a55bba376075c1b0a1bb5b2cc4f6067a09bf082482e5"} Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.813207 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zrzh2" event={"ID":"698a7180-694e-4712-8087-afa8fd7d6d4f","Type":"ContainerStarted","Data":"d9847a8070dc33055e96d62b10f03dc146d38349929a92e74677504466c278a8"} Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.825082 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5" event={"ID":"7224c6fe-8b26-4d04-b5be-20515e19eb5b","Type":"ContainerStarted","Data":"7a8186f26f7aa86c0c85713879adde7087c1cae616fb831ba92ad8f7ec2e3afe"} Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.825729 4858 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-r2zjn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.825773 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" podUID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.838526 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" podStartSLOduration=123.838511548 podStartE2EDuration="2m3.838511548s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:55.837755918 +0000 UTC m=+144.385354067" watchObservedRunningTime="2025-12-05 13:58:55.838511548 +0000 UTC m=+144.386109687" Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.889649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:55 crc 
kubenswrapper[4858]: E1205 13:58:55.890690 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.390668112 +0000 UTC m=+144.938266251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:55 crc kubenswrapper[4858]: I1205 13:58:55.990807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:55 crc kubenswrapper[4858]: E1205 13:58:55.991155 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.491142372 +0000 UTC m=+145.038740511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.006122 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.007939 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9dl2k"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.016048 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-q7jsq"] Dec 05 13:58:56 crc kubenswrapper[4858]: W1205 13:58:56.017481 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod234d955e_a1e1_4b72_b1d6_da4a4f74f82d.slice/crio-edca3f8ccf0aee8264f80441062d3024a2d5bcc7bce52ec6179804c2660bd154 WatchSource:0}: Error finding container edca3f8ccf0aee8264f80441062d3024a2d5bcc7bce52ec6179804c2660bd154: Status 404 returned error can't find the container with id edca3f8ccf0aee8264f80441062d3024a2d5bcc7bce52ec6179804c2660bd154 Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.025507 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.046400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.051457 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.053893 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.091832 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.092151 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.592124357 +0000 UTC m=+145.139722496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.193896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.194329 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.694312645 +0000 UTC m=+145.241910784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.295132 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.295343 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.795314191 +0000 UTC m=+145.342912330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.295594 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.295900 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.795887076 +0000 UTC m=+145.343485215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.340874 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-t8x88"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.361684 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.365664 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-l27jv"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.377082 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.379268 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xxk7s"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.383160 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-26jzf"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.389684 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.396538 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.396858 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.8968433 +0000 UTC m=+145.444441439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.399371 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9qgzs"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.401716 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.405264 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x25gp"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.411045 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5c95q"] Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.415421 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b"] Dec 05 13:58:56 crc kubenswrapper[4858]: W1205 13:58:56.434373 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95eba5b0_94bb_4594_a49e_ca21538ef39d.slice/crio-66db33dccdabe74035693b3151e68986566e639c58d96cdaf9151e236968e928 WatchSource:0}: Error finding container 66db33dccdabe74035693b3151e68986566e639c58d96cdaf9151e236968e928: Status 404 returned error can't find the container with id 66db33dccdabe74035693b3151e68986566e639c58d96cdaf9151e236968e928 Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.497888 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.498233 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:56.998217526 +0000 UTC m=+145.545815665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.599744 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.600087 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.100061124 +0000 UTC m=+145.647659263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.701523 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.701961 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.201944824 +0000 UTC m=+145.749542963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.806482 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.806630 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.30659816 +0000 UTC m=+145.854196329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.807278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.807737 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.3077204 +0000 UTC m=+145.855318569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.832797 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" event={"ID":"50cce18d-88c6-44b7-9a7d-9a9734a2eba2","Type":"ContainerStarted","Data":"8d3c8e9b56c8cb248c71979dd0c97b62a69701ddc40c58c4d8bfb3a363c2ca71"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.834969 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq" event={"ID":"57218a3f-09f7-4d6a-a308-b17e118f46ae","Type":"ContainerStarted","Data":"7e0419f0e793d99e14a43c04ca057031fef0aa89120c2807e3b3f9b0facea3bb"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.836765 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r" event={"ID":"9cc0327d-c1d0-4177-9670-b53e2e205cbc","Type":"ContainerStarted","Data":"c056edff9c747ab60ddf57565780486201bc7e97d1aee8e1eb28d9ade4398f47"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.838316 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" event={"ID":"61356f17-0b7f-4482-83f2-5a6d542a4e68","Type":"ContainerStarted","Data":"04fd6e6672aa506cfc0cb801672d7b8ed1e67ac1da953f2a654c64103f2d9e08"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.840110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5c95q" event={"ID":"95eba5b0-94bb-4594-a49e-ca21538ef39d","Type":"ContainerStarted","Data":"66db33dccdabe74035693b3151e68986566e639c58d96cdaf9151e236968e928"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.841903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" event={"ID":"e6d32935-4d3d-43c9-b7c7-8735545d39ba","Type":"ContainerStarted","Data":"3b6f062735aa16461aaf2bbf38138b69dd7026fbefa15d66798cba5d4742841c"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.844585 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h" event={"ID":"80f5ad75-7da0-493a-9fd3-eb605b50e650","Type":"ContainerStarted","Data":"eef3539fc7e0b548d182d639eab37dcf567ddea57f8c901baaff97e601bfef9a"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.845984 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" event={"ID":"cedb2565-0837-4473-89e6-84269d6e3766","Type":"ContainerStarted","Data":"d22916b90c9eedea967d813bfee1f1c44bcd69a6c8635410fb214f113a0957ae"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.847576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9dl2k" event={"ID":"234d955e-a1e1-4b72-b1d6-da4a4f74f82d","Type":"ContainerStarted","Data":"edca3f8ccf0aee8264f80441062d3024a2d5bcc7bce52ec6179804c2660bd154"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.848653 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" event={"ID":"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee","Type":"ContainerStarted","Data":"39b5ac7aa11971fa7ef839b316e9fb0f6918e7320da16f7cec0b76292dbfc3a7"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.850018 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" event={"ID":"42ae75c8-e3d2-4328-83ef-4d7279d05abd","Type":"ContainerStarted","Data":"b0159362d7028e54857b777db8c655a60ad220442b431600c4480a6685bc33c9"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.851435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k" event={"ID":"86eb64e6-0d80-466b-842d-1d464e1a7fa9","Type":"ContainerStarted","Data":"f3e77fdb88e3ef9bee744da312c6f65ee44ac428e3459609d2a023f4a6884b2b"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.853398 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t8x88" event={"ID":"d119a06b-0504-4a14-a82e-c8f877c6d01a","Type":"ContainerStarted","Data":"aa7e4f3fe24c697c736b19698f5c4ab910560ced5afddc52ece00ca717c3712f"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.855429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x25gp" event={"ID":"1329b103-5d7b-492b-96ed-c7b5b10e8edd","Type":"ContainerStarted","Data":"a807aa596b09d99c0278ec930a1d5ee6783b6da60ab51b1d752079aad8eaf1e0"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.856885 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs" event={"ID":"daefbd61-f897-46b5-9e48-d0f03f81aff0","Type":"ContainerStarted","Data":"bb074db26c7d37ba81a340972c519bbb57d0019a98612112927e49d9bd0ed765"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.858223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf" event={"ID":"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c","Type":"ContainerStarted","Data":"0f5672ef4308e6c3990efe57fc915e304340da7a2472e3a5cac355e051eca435"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.859647 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg" event={"ID":"8a09c06e-57de-4891-b165-b1b42308b23b","Type":"ContainerStarted","Data":"1247d6c9cf8f47c23dcca9cd9e37d5d9dc7fe75f34ee7239b836614c08cdda99"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.861032 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerStarted","Data":"0e08cbca761727100bf298fecfdeb157b80ab7178c8946f898127a08c7fc5cc7"} Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.908990 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.909149 4858 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.409120567 +0000 UTC m=+145.956718726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:56 crc kubenswrapper[4858]: I1205 13:58:56.909315 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:56 crc kubenswrapper[4858]: E1205 13:58:56.909854 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.409808785 +0000 UTC m=+145.957406934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.010584 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.010849 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.510791501 +0000 UTC m=+146.058389650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.010982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.011381 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.511369366 +0000 UTC m=+146.058967515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.112221 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.113947 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.613719629 +0000 UTC m=+146.161317768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.214792 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.215214 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.715194246 +0000 UTC m=+146.262792395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.316656 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.316728 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.816707356 +0000 UTC m=+146.364305515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.317183 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.317520 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.817504988 +0000 UTC m=+146.365103137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.418511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.418746 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.918718549 +0000 UTC m=+146.466316698 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.418898 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.419280 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:57.919270074 +0000 UTC m=+146.466868223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.520196 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.520435 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:58.020400663 +0000 UTC m=+146.567998842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.520737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.521296 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:58.021271757 +0000 UTC m=+146.568869936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.622518 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.622696 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:58.122663194 +0000 UTC m=+146.670261353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.622792 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.623143 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:58.123131236 +0000 UTC m=+146.670729385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.724711 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.725117 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:58.225093668 +0000 UTC m=+146.772691817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.826463 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.826845 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:58.326814563 +0000 UTC m=+146.874412692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.868844 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b" event={"ID":"fb636da4-8963-449c-adb8-8ba8d1a66d3b","Type":"ContainerStarted","Data":"0c2d18ee9fa5743e223609814453d88df75a3d6ff6ae0372f32dbddd6942cd7b"} Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.870069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" event={"ID":"4aded898-143e-40c9-99b8-5dd45d739d64","Type":"ContainerStarted","Data":"0621d52d427001067d9e6b111f50c423f217b12c928d1240a4ed258189ee9558"} Dec 05 13:58:57 crc kubenswrapper[4858]: I1205 13:58:57.927120 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:57 crc kubenswrapper[4858]: E1205 13:58:57.927529 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:58.42751238 +0000 UTC m=+146.975110509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[The same UnmountVolume.TearDown / MountVolume.MountDevice failure pair for "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" repeats at roughly 100 ms intervals from 13:58:58.029 through 13:58:58.842, each attempt deferred again by 500 ms; the duplicate records are elided here, and the log resumes below with the tail of the final iteration.]
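The loop elided above is the kubelet's volume reconciler re-queuing the same unmount/mount pair while kubevirt.io.hostpath-provisioner is missing from the kubelet's list of registered CSI drivers (typically because the driver pod has not yet re-registered over the plugin socket after the node restart). Each failure arms a deferral window, and the nestedpendingoperations.go layer rejects re-attempts until it expires, which is why every record carries "No retries permitted until ... (durationBeforeRetry 500ms)". A minimal sketch of that gate, assuming the fixed 500 ms window seen in this capture (names such as retryGate are illustrative, not kubelet's actual types):

```go
// Sketch of a "No retries permitted until ..." gate: a failed operation
// records a deadline keyed by volume+pod, and re-queued attempts before
// that deadline are rejected immediately. Illustrative only.
package main

import (
	"fmt"
	"time"
)

const durationBeforeRetry = 500 * time.Millisecond // matches the log's deferral

type retryGate struct {
	notBefore map[string]time.Time // key: volumeName+podName, as in the log records
}

func newRetryGate() *retryGate {
	return &retryGate{notBefore: make(map[string]time.Time)}
}

// try runs op unless the key is still inside its deferral window.
func (g *retryGate) try(key string, op func() error) error {
	if until, ok := g.notBefore[key]; ok && time.Now().Before(until) {
		return fmt.Errorf("operation for %q failed. No retries permitted until %s",
			key, until.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.notBefore[key] = time.Now().Add(durationBeforeRetry) // arm the window
		return err
	}
	delete(g.notBefore, key) // first success clears the gate
	return nil
}

func main() {
	g := newRetryGate()
	key := "kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/8f668bae"
	fail := func() error {
		return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	fmt.Println(g.try(key, fail)) // first attempt: runs, fails, arms the 500 ms deferral
	fmt.Println(g.try(key, fail)) // immediate retry: rejected by the gate
	time.Sleep(durationBeforeRetry)
	fmt.Println(g.try(key, fail)) // after the window: runs (and fails) again
}
```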
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.843140 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:58 crc kubenswrapper[4858]: E1205 13:58:58.843536 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:58:59.343528991 +0000 UTC m=+147.891127130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.875448 4858 generic.go:334] "Generic (PLEG): container finished" podID="5f47f6b4-2307-4660-b7d6-61a604ee2a81" containerID="638f7eca022fd2a6aa1feee5a7fe01311e065ea5394621a306b4dc85b1354439" exitCode=0 Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.875528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" event={"ID":"5f47f6b4-2307-4660-b7d6-61a604ee2a81","Type":"ContainerDied","Data":"638f7eca022fd2a6aa1feee5a7fe01311e065ea5394621a306b4dc85b1354439"} Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.876794 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" event={"ID":"065bd27a-40da-4591-82c4-2c1e8717b9d6","Type":"ContainerStarted","Data":"ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1"} Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.877818 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22" event={"ID":"a671e973-ca0c-4692-b7ee-fbd76d2c252f","Type":"ContainerStarted","Data":"9e30eef1ac82beea9eb90499fa2e8a8344f95a207c6a8e65bd430311411d7f43"} Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.878988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kmzj6" event={"ID":"43c50736-3414-483f-8104-cefb05d4552c","Type":"ContainerStarted","Data":"44094343a23a54e638c17e158428c4bca3d6b98d34f1d8310b3875a023636a1f"} Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.880401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" 
event={"ID":"0bd8b721-b4f7-4be5-bcc8-518c65097fa1","Type":"ContainerStarted","Data":"2f00ed0d4b0fc9e13bad47e613cb8640610688e4d72474c499398fcae018e9dd"} Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.881651 4858 generic.go:334] "Generic (PLEG): container finished" podID="4aded898-143e-40c9-99b8-5dd45d739d64" containerID="0621d52d427001067d9e6b111f50c423f217b12c928d1240a4ed258189ee9558" exitCode=0 Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.881745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" event={"ID":"4aded898-143e-40c9-99b8-5dd45d739d64","Type":"ContainerDied","Data":"0621d52d427001067d9e6b111f50c423f217b12c928d1240a4ed258189ee9558"} Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.882049 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.883253 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wfbnh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.883299 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" podUID="ee76bb43-a079-4631-aace-ba93a4e04e4a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.899666 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" podStartSLOduration=126.899650824 podStartE2EDuration="2m6.899650824s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:58:58.896219109 +0000 UTC m=+147.443817248" watchObservedRunningTime="2025-12-05 13:58:58.899650824 +0000 UTC m=+147.447248963" Dec 05 13:58:58 crc kubenswrapper[4858]: I1205 13:58:58.944631 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:58 crc kubenswrapper[4858]: E1205 13:58:58.944814 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:58:59.444798514 +0000 UTC m=+147.992396653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[The retry pair continues at roughly 100 ms intervals from 13:58:59.046 through 13:58:59.554, still failing with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers"; duplicates elided, resuming below with the tail of the final iteration.]
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.656338 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:59 crc kubenswrapper[4858]: E1205 13:58:59.656837 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.156801638 +0000 UTC m=+148.704399777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.757528 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.757708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.757736 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.757775 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.757840 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.758492 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:59 crc kubenswrapper[4858]: E1205 13:58:59.758574 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.258558625 +0000 UTC m=+148.806156764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.793440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.795904 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.818129 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.859400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:58:59 crc kubenswrapper[4858]: E1205 13:58:59.859744 4858 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.359730125 +0000 UTC m=+148.907328264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.921849 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.935752 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.951767 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.964328 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:58:59 crc kubenswrapper[4858]: E1205 13:58:59.964633 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.464616537 +0000 UTC m=+149.012214676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.983321 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5c95q" event={"ID":"95eba5b0-94bb-4594-a49e-ca21538ef39d","Type":"ContainerStarted","Data":"056ba6837f99e2de36b3995e64529f1b70f16503523945cb906521baa9872143"} Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.993899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" event={"ID":"42ae75c8-e3d2-4328-83ef-4d7279d05abd","Type":"ContainerStarted","Data":"575ff2a35e224f67f4185b0dc0a68aeb48e819434e5947e9b41cd1e0a354d056"} Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.995111 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz" event={"ID":"97821ca1-2978-4fcf-a6cc-fdf101794a17","Type":"ContainerStarted","Data":"7df0f0f68ea1772a913d3da923ab8f3bab107e5391f9598be318dcc90c66b42d"} Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.996083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k" event={"ID":"86eb64e6-0d80-466b-842d-1d464e1a7fa9","Type":"ContainerStarted","Data":"549212aab8f01b483bd78b3e90d513fc6b436187890a29e09f566704788b29c9"} Dec 05 13:58:59 crc kubenswrapper[4858]: I1205 13:58:59.997031 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq" event={"ID":"57218a3f-09f7-4d6a-a308-b17e118f46ae","Type":"ContainerStarted","Data":"a2a496a029e4aaf59f9fa0461634b1ca73e8694bebb03d53cb52c8d498913a7f"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:58:59.997965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zrzh2" event={"ID":"698a7180-694e-4712-8087-afa8fd7d6d4f","Type":"ContainerStarted","Data":"b555d5f01892f93710f58ce4f5af01b7bc6026b77f2134ab193df1469b24ff23"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.001673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r" event={"ID":"9cc0327d-c1d0-4177-9670-b53e2e205cbc","Type":"ContainerStarted","Data":"7e731c4f635650f6b57096f4b942a5157bd50cc8af6f9038a20ca928ded7b92d"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.006946 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2" event={"ID":"053fb3f3-4898-45f5-abc7-0a14c273bd5b","Type":"ContainerStarted","Data":"3703dc79c7b4ee92218dc3f822d8e50e0741cdfe650d387a65c455741f23949d"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.012891 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" event={"ID":"cedb2565-0837-4473-89e6-84269d6e3766","Type":"ContainerStarted","Data":"c2104c72b6990c443ed3bc7434b5b7ccc9fcc3df8306832fa903138c5327e226"} Dec 05 13:59:00 crc kubenswrapper[4858]: 
I1205 13:59:00.021046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h" event={"ID":"80f5ad75-7da0-493a-9fd3-eb605b50e650","Type":"ContainerStarted","Data":"b6d0eb9c296bc6483bc7f3b4e6cae266c82a939633dcedfb61d191db218e6fd0"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.036615 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-t8x88" event={"ID":"d119a06b-0504-4a14-a82e-c8f877c6d01a","Type":"ContainerStarted","Data":"5afe47ef26ecd8ef758ff78e64c5cd0b5b1c811ffd7efa206d21eab08c98463a"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.037812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x25gp" event={"ID":"1329b103-5d7b-492b-96ed-c7b5b10e8edd","Type":"ContainerStarted","Data":"df560568844a9e0e9ced309a9d458ca9b9c1c357374e4e5c02d83679a7ccd1ce"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.038867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs" event={"ID":"daefbd61-f897-46b5-9e48-d0f03f81aff0","Type":"ContainerStarted","Data":"a627ace725002e69c39f6f2250837d7948506d9690783f19db6da7efa4a7dce0"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.039868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq" event={"ID":"3418e2ae-f14a-42c7-88b7-b46764bd9032","Type":"ContainerStarted","Data":"6c424bc2a56d2daf6003db49c0746c6f0bdf67d9a0f67e3a1432dd75c124fdf0"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.040850 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" event={"ID":"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee","Type":"ContainerStarted","Data":"27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.041781 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" event={"ID":"2950ccec-35ea-4679-8cf6-1a67f52264b4","Type":"ContainerStarted","Data":"798e48a39743b6fd6cb68ef153173cf96417dcd986b150ffa6431e3828cf20b2"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.042856 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.043655 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.043687 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.044812 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf" 
event={"ID":"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c","Type":"ContainerStarted","Data":"e368def7b6d4856311a8b98ad1c4c8de33e2b20d4e9d25bf7ffe3b2701e1aa8f"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.045978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" event={"ID":"50cce18d-88c6-44b7-9a7d-9a9734a2eba2","Type":"ContainerStarted","Data":"2997386d895ae53a31b83cce7361b0ee4f0117e07b9be335d2484848e7e507e6"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.048896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" event={"ID":"db8cbc4d-eadf-4949-9b00-760f67bd0442","Type":"ContainerStarted","Data":"5d5e56deb692818aca7f22a2b4d45f29105c2352931b33f27871b2cccbbb1f24"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.052791 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b" event={"ID":"fb636da4-8963-449c-adb8-8ba8d1a66d3b","Type":"ContainerStarted","Data":"c1931391d76b915d8233447da40de528767d7ef28a21ead6ba97c5ffb89467fd"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.057284 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg" event={"ID":"8a09c06e-57de-4891-b165-b1b42308b23b","Type":"ContainerStarted","Data":"81fe51a502d619b776c621e44bac55e01f5eee128b280821e693b12ef4133031"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.066367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5" event={"ID":"7224c6fe-8b26-4d04-b5be-20515e19eb5b","Type":"ContainerStarted","Data":"b58f8a0a93620ef2b1e743c8041be6fe2a4dee02c1a56f8580bd6556cd967b18"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.066467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.067003 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.566885697 +0000 UTC m=+149.114483836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.067396 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podStartSLOduration=127.067387291 podStartE2EDuration="2m7.067387291s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:00.066484886 +0000 UTC m=+148.614083015" watchObservedRunningTime="2025-12-05 13:59:00.067387291 +0000 UTC m=+148.614985430" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.085064 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm" event={"ID":"2e53905c-348b-4d4b-897d-c2e47d3b8562","Type":"ContainerStarted","Data":"784b5bd6516f24c7bfea044ffc077b5a78c7043ea2e431d05056cd8a26fcaeff"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.086205 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" event={"ID":"313be014-d206-4d8a-a459-8f1a34bb4e7a","Type":"ContainerStarted","Data":"d4b70b4a57b917850f4474bc790f8bf147b7f09ea0aedf440c56626e79a38163"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.087080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc" event={"ID":"ab28fcbb-545b-4e1a-9c37-b3db4335917c","Type":"ContainerStarted","Data":"9bf7c9ac2c14b7bdb7caa117039633003ed964782dfa1a909cf1dfaa67192f1c"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.088344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9dl2k" event={"ID":"234d955e-a1e1-4b72-b1d6-da4a4f74f82d","Type":"ContainerStarted","Data":"babfb4cbfa6b3c35dbf8717731f81c43cb4b6b5b7bdb2614732674f62bc35980"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.104127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" event={"ID":"61356f17-0b7f-4482-83f2-5a6d542a4e68","Type":"ContainerStarted","Data":"f3bc58ea40908d12359639c279624e7b565c36fc38052593e24777f8cc42599b"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.108443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" event={"ID":"8fbbfe0b-3a39-4a71-8ee8-fcce371b97b0","Type":"ContainerStarted","Data":"c8bb3b13e0e5e7327269db278fca27eff584de956d719223bfff4f8ae0a5e067"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.109976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" event={"ID":"e6d32935-4d3d-43c9-b7c7-8735545d39ba","Type":"ContainerStarted","Data":"f3f5c30933509f77db98062d579f5aa5563b81d8601072a8ff6980f2a57df5a2"} Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.113085 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rzsvl" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.113105 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.113179 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wfbnh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.113203 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" podUID="ee76bb43-a079-4631-aace-ba93a4e04e4a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.127922 4858 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4zztz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.127970 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.128031 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.128045 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.151877 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n6wsw" podStartSLOduration=128.151855072 podStartE2EDuration="2m8.151855072s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:00.141372374 +0000 UTC m=+148.688970513" watchObservedRunningTime="2025-12-05 13:59:00.151855072 +0000 UTC m=+148.699453211" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.167725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 
13:59:00.169183 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.669167528 +0000 UTC m=+149.216765667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.194385 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lxq22" podStartSLOduration=127.19436977 podStartE2EDuration="2m7.19436977s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:00.193641231 +0000 UTC m=+148.741239370" watchObservedRunningTime="2025-12-05 13:59:00.19436977 +0000 UTC m=+148.741967909" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.236006 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-rzsvl" podStartSLOduration=128.235984404 podStartE2EDuration="2m8.235984404s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:00.222533734 +0000 UTC m=+148.770131873" watchObservedRunningTime="2025-12-05 13:59:00.235984404 +0000 UTC m=+148.783582543" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.270448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.270756 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.770740969 +0000 UTC m=+149.318339108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.278703 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kmzj6" podStartSLOduration=127.278679607 podStartE2EDuration="2m7.278679607s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:00.275455038 +0000 UTC m=+148.823053187" watchObservedRunningTime="2025-12-05 13:59:00.278679607 +0000 UTC m=+148.826277746" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.372914 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.373546 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.873527153 +0000 UTC m=+149.421125292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.416222 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" podStartSLOduration=128.416206696 podStartE2EDuration="2m8.416206696s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:00.331214891 +0000 UTC m=+148.878813030" watchObservedRunningTime="2025-12-05 13:59:00.416206696 +0000 UTC m=+148.963804825" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.474789 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.475335 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:00.975322201 +0000 UTC m=+149.522920340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.592469 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.593071 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.093051966 +0000 UTC m=+149.640650105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.684175 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kmzj6" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.684340 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.684364 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.697564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.697988 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.1979714 +0000 UTC m=+149.745569539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.798287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.799227 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.29920166 +0000 UTC m=+149.846799799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.799505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.799968 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.299953531 +0000 UTC m=+149.847551680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.901149 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.901290 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.401264635 +0000 UTC m=+149.948862774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:00 crc kubenswrapper[4858]: I1205 13:59:00.901653 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:00 crc kubenswrapper[4858]: E1205 13:59:00.902012 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.401999716 +0000 UTC m=+149.949597855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.002802 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.002961 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.502935599 +0000 UTC m=+150.050533738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.003084 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.003450 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.503441593 +0000 UTC m=+150.051039732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.104116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.104487 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.604471749 +0000 UTC m=+150.152069888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.136427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz" event={"ID":"97821ca1-2978-4fcf-a6cc-fdf101794a17","Type":"ContainerStarted","Data":"1b99f275a33ede00934db9723eabd90d3d0a05a99c54e58a7a37162dc8a342a0"} Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.138192 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"43c65156e4d2755291c64302c050063532173ee74ad44bf8fbc06aca26f58be1"} Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.154122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2751dedc942dd0218e701dbdd08ef76c74e60c6fe705c14ba6b1c82d1b7f2ca4"} Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.156466 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerStarted","Data":"714fab6a4b4ed795f4c07ad114c7088986813b0085cdbf2109f32a7e1c39a10a"} Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.158324 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm" event={"ID":"2e53905c-348b-4d4b-897d-c2e47d3b8562","Type":"ContainerStarted","Data":"78688f93d967f00bb7c71f7bc4343ec68998e1fb4376e1ee9ced2859e1c2018f"} Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.163299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d2ba03ecf93799105750fb7d9c5294f98cdc6aed842e3434eb89845453abf310"} Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.164623 4858 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4zztz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.164658 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.166954 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= 
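
The repeating failures above are all one condition: the kubelet's volume manager cannot find kubevirt.io.hostpath-provisioner among its registered CSI plugins, so every MountDevice attempt for image-registry-697d97f7c8-trcq9 and every TearDown attempt for pod 8f668bae-612b-4b75-9490-919e737c6a3b is rejected and requeued by nestedpendingoperations with a 500ms backoff. The PLEG event just above for hostpath-provisioner/csi-hostpathplugin-l27jv shows the driver pod only now starting; the errors should stop once the plugin (typically via its node-driver-registrar sidecar) registers with the kubelet. Below is a minimal sketch of how one might confirm that registration from outside the node, assuming the official kubernetes Python client, a reachable kubeconfig, and that the node object is named "crc"; everything in it is illustrative and not part of this journal.

```python
# Illustrative sketch, not part of the log: check whether the
# kubevirt.io.hostpath-provisioner CSI driver has registered, both
# cluster-wide (CSIDriver) and on the node doing the mount (CSINode).
# Assumes `pip install kubernetes` and a working kubeconfig; the node
# name "crc" matches the hostname in the journal above.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

# Cluster-scoped CSIDriver objects: the driver's advertisement to the cluster.
print("CSIDriver objects:",
      [d.metadata.name for d in storage.list_csi_driver().items])

# Per-node registration: the kubelet error above clears only once the driver
# appears in the CSINode object for the node that is mounting the volume.
csinode = storage.read_csi_node("crc")
registered = [d.name for d in (csinode.spec.drivers or [])]
print("Registered on node crc:", registered)
print("hostpath provisioner registered:",
      "kubevirt.io.hostpath-provisioner" in registered)
```

If the driver never appears, the usual suspects are the registrar sidecar failing or the plugin's registration socket missing under /var/lib/kubelet/plugins_registry/ on the node.
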
Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.166992 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.169540 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.170712 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.170744 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.205118 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.205434 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.705420233 +0000 UTC m=+150.253018372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.220240 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xgwpc" podStartSLOduration=128.2202207 podStartE2EDuration="2m8.2202207s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.219118599 +0000 UTC m=+149.766716738" watchObservedRunningTime="2025-12-05 13:59:01.2202207 +0000 UTC m=+149.767818839" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.221130 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g5f8h" podStartSLOduration=128.221124345 podStartE2EDuration="2m8.221124345s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.18199839 +0000 UTC m=+149.729596529" watchObservedRunningTime="2025-12-05 13:59:01.221124345 +0000 UTC m=+149.768722494" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.256508 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" podStartSLOduration=128.256491526 podStartE2EDuration="2m8.256491526s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.254304606 +0000 UTC m=+149.801902745" watchObservedRunningTime="2025-12-05 13:59:01.256491526 +0000 UTC m=+149.804089665" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.277484 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" podStartSLOduration=128.277467873 podStartE2EDuration="2m8.277467873s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.276525217 +0000 UTC m=+149.824123356" watchObservedRunningTime="2025-12-05 13:59:01.277467873 +0000 UTC m=+149.825066012" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.308304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.309788 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 13:59:01.809773731 +0000 UTC m=+150.357371870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.324978 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s5cwr" podStartSLOduration=129.324967978 podStartE2EDuration="2m9.324967978s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.299502059 +0000 UTC m=+149.847100198" watchObservedRunningTime="2025-12-05 13:59:01.324967978 +0000 UTC m=+149.872566117" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.352145 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fq6mq" podStartSLOduration=128.352131874 podStartE2EDuration="2m8.352131874s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.351222959 +0000 UTC m=+149.898821098" watchObservedRunningTime="2025-12-05 13:59:01.352131874 +0000 UTC m=+149.899730013" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.354698 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" podStartSLOduration=129.354688675 podStartE2EDuration="2m9.354688675s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.327156208 +0000 UTC m=+149.874754347" watchObservedRunningTime="2025-12-05 13:59:01.354688675 +0000 UTC m=+149.902286814" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.404926 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r2j8b" podStartSLOduration=128.404908215 podStartE2EDuration="2m8.404908215s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.403323411 +0000 UTC m=+149.950921540" watchObservedRunningTime="2025-12-05 13:59:01.404908215 +0000 UTC m=+149.952506354" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.405022 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9dl2k" podStartSLOduration=10.405017648 podStartE2EDuration="10.405017648s" podCreationTimestamp="2025-12-05 13:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.377293886 +0000 UTC m=+149.924892015" 
watchObservedRunningTime="2025-12-05 13:59:01.405017648 +0000 UTC m=+149.952615787" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.410249 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.410728 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:01.910714194 +0000 UTC m=+150.458312333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.444352 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rpkw2" podStartSLOduration=128.444327668 podStartE2EDuration="2m8.444327668s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.440086621 +0000 UTC m=+149.987684770" watchObservedRunningTime="2025-12-05 13:59:01.444327668 +0000 UTC m=+149.991925807" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.498518 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x25gp" podStartSLOduration=129.498502717 podStartE2EDuration="2m9.498502717s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.497509669 +0000 UTC m=+150.045107818" watchObservedRunningTime="2025-12-05 13:59:01.498502717 +0000 UTC m=+150.046100856" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.498805 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-lw86k" podStartSLOduration=129.498799685 podStartE2EDuration="2m9.498799685s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.466934429 +0000 UTC m=+150.014532598" watchObservedRunningTime="2025-12-05 13:59:01.498799685 +0000 UTC m=+150.046397824" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.515637 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 
05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.516029 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.016012837 +0000 UTC m=+150.563610976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.523752 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zrzh2" podStartSLOduration=11.52373414 podStartE2EDuration="11.52373414s" podCreationTimestamp="2025-12-05 13:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.519951556 +0000 UTC m=+150.067549695" watchObservedRunningTime="2025-12-05 13:59:01.52373414 +0000 UTC m=+150.071332279" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.559462 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qnpwj" podStartSLOduration=129.559442781 podStartE2EDuration="2m9.559442781s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.557245491 +0000 UTC m=+150.104843640" watchObservedRunningTime="2025-12-05 13:59:01.559442781 +0000 UTC m=+150.107040930" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.593741 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podStartSLOduration=129.593701683 podStartE2EDuration="2m9.593701683s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.592045806 +0000 UTC m=+150.139643945" watchObservedRunningTime="2025-12-05 13:59:01.593701683 +0000 UTC m=+150.141299822" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.608854 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-t8x88" podStartSLOduration=128.608819217 podStartE2EDuration="2m8.608819217s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.607306196 +0000 UTC m=+150.154904345" watchObservedRunningTime="2025-12-05 13:59:01.608819217 +0000 UTC m=+150.156417356" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.619237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: 
\"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.619698 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.119682216 +0000 UTC m=+150.667280355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.665145 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.665204 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.720853 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.721396 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.221375131 +0000 UTC m=+150.768973270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.822277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.822548 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.32253677 +0000 UTC m=+150.870134909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:01 crc kubenswrapper[4858]: I1205 13:59:01.934493 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:01 crc kubenswrapper[4858]: E1205 13:59:01.938776 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.438751224 +0000 UTC m=+150.986349373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.058684 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.058671 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" podStartSLOduration=130.058657489 podStartE2EDuration="2m10.058657489s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:01.635907122 +0000 UTC m=+150.183505271" watchObservedRunningTime="2025-12-05 13:59:02.058657489 +0000 UTC m=+150.606255628" Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.059274 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.559258955 +0000 UTC m=+151.106857094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.160310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.160922 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.660890588 +0000 UTC m=+151.208488727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.262110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.266414 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.266483 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.266945 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.266994 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.269219 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.769185294 +0000 UTC m=+151.316783433 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.336578 4858 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4zztz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.338868 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.367103 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.367454 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.867437003 +0000 UTC m=+151.415035142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.420532 4858 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wfbnh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.420995 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" podUID="ee76bb43-a079-4631-aace-ba93a4e04e4a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.474263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.474696 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:02.974682191 +0000 UTC m=+151.522280330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.508497 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.511919 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.593611 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.594692 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.094675989 +0000 UTC m=+151.642274118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.696586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.696966 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.196951519 +0000 UTC m=+151.744549658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.711470 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5c95q"
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.723608 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw"
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.723689 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.723717 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.730313 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.730347 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.730410 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-6klpw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.730426 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" podUID="e6d32935-4d3d-43c9-b7c7-8735545d39ba" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.738590 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 13:59:02 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Dec 05 13:59:02 crc kubenswrapper[4858]: [+]process-running ok
Dec 05 13:59:02 crc kubenswrapper[4858]: healthz check failed
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.738645 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.818525 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.818598 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.318581941 +0000 UTC m=+151.866180080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.819131 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.820961 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.320948876 +0000 UTC m=+151.868547015 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:02 crc kubenswrapper[4858]: I1205 13:59:02.924720 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:02 crc kubenswrapper[4858]: E1205 13:59:02.925754 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.425736515 +0000 UTC m=+151.973334654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.032936 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.033213 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.533199978 +0000 UTC m=+152.080798117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.133632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.134063 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.634014619 +0000 UTC m=+152.181612758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.230922 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" podStartSLOduration=130.230904321 podStartE2EDuration="2m10.230904321s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.191154679 +0000 UTC m=+151.738752818" watchObservedRunningTime="2025-12-05 13:59:03.230904321 +0000 UTC m=+151.778502460" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.234927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.235398 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.735386464 +0000 UTC m=+152.282984603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.296403 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg" podStartSLOduration=130.29636493 podStartE2EDuration="2m10.29636493s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.230788038 +0000 UTC m=+151.778386177" watchObservedRunningTime="2025-12-05 13:59:03.29636493 +0000 UTC m=+151.843963079" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.297261 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.320248 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf" podStartSLOduration=130.320230396 podStartE2EDuration="2m10.320230396s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.319568107 +0000 UTC m=+151.867166266" watchObservedRunningTime="2025-12-05 13:59:03.320230396 +0000 UTC m=+151.867828535" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.321302 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" podStartSLOduration=130.321295745 podStartE2EDuration="2m10.321295745s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.296995807 +0000 UTC m=+151.844593946" watchObservedRunningTime="2025-12-05 13:59:03.321295745 +0000 UTC m=+151.868893884" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.348454 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.348943 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.848926964 +0000 UTC m=+152.396525093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.387888 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wjlfz" podStartSLOduration=130.387868324 podStartE2EDuration="2m10.387868324s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.357665755 +0000 UTC m=+151.905263894" watchObservedRunningTime="2025-12-05 13:59:03.387868324 +0000 UTC m=+151.935466463" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.414805 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5c95q" podStartSLOduration=13.414784393 podStartE2EDuration="13.414784393s" podCreationTimestamp="2025-12-05 13:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.413317083 +0000 UTC m=+151.960915222" watchObservedRunningTime="2025-12-05 13:59:03.414784393 +0000 UTC m=+151.962382532" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.449098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.449398 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:03.949384654 +0000 UTC m=+152.496982793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.551207 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.551454 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.051436809 +0000 UTC m=+152.599034948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.652699 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.653284 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.153267707 +0000 UTC m=+152.700865856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.696111 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kmzj6" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.703731 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 13:59:03 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Dec 05 13:59:03 crc kubenswrapper[4858]: [+]process-running ok Dec 05 13:59:03 crc kubenswrapper[4858]: healthz check failed Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.703773 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.728782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq" event={"ID":"57218a3f-09f7-4d6a-a308-b17e118f46ae","Type":"ContainerStarted","Data":"3ef44ef9913cc7c8fbc190f498c408fecb50d1b8f7e47565de46ff27f47ca753"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.730312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs" event={"ID":"daefbd61-f897-46b5-9e48-d0f03f81aff0","Type":"ContainerStarted","Data":"75e656abc0b8dfdf5380e6fc448c15f98aa98172b401b91b1d2070f9852fa0c3"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.732020 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r" event={"ID":"9cc0327d-c1d0-4177-9670-b53e2e205cbc","Type":"ContainerStarted","Data":"4df43461d863151d306f4b81c7e848666a81e4e244ca1aed7e822befe3d5cadf"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.746277 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vfjgg" event={"ID":"8a09c06e-57de-4891-b165-b1b42308b23b","Type":"ContainerStarted","Data":"44639eba176cd50f78e3415d1cbad1c19faf45336801214807a7e97bf1e397f2"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.754479 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.754630 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 13:59:04.254613032 +0000 UTC m=+152.802211161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.755404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.757956 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.257942123 +0000 UTC m=+152.805540262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.767630 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" event={"ID":"5f47f6b4-2307-4660-b7d6-61a604ee2a81","Type":"ContainerStarted","Data":"0a22501dae9dfed81a2acae028ddca1a3f23c692e462b24f8585a294e79b3a35"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.767690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" event={"ID":"5f47f6b4-2307-4660-b7d6-61a604ee2a81","Type":"ContainerStarted","Data":"fa1864abd52a98940ac1c6a6b9b9ea16d2bfdd16b89167abb3371c346a65e836"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.775026 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-q7jsq" podStartSLOduration=131.775011143 podStartE2EDuration="2m11.775011143s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.772802471 +0000 UTC m=+152.320400600" watchObservedRunningTime="2025-12-05 13:59:03.775011143 +0000 UTC m=+152.322609282" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.777042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5c95q" event={"ID":"95eba5b0-94bb-4594-a49e-ca21538ef39d","Type":"ContainerStarted","Data":"8801b93cfaa2ee2b11cea477a9a69cea9eb4ba872c6cd7bbb71cfa6aaef5c3fa"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.779074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-26jzf" event={"ID":"d2ff5e71-d11f-4276-8bd9-2bea3cb5ba9c","Type":"ContainerStarted","Data":"0e66b384692ea36c9c57173de6070f7c4bd27816c7ff0dcac34a09327ecff44b"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.780846 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm" event={"ID":"2e53905c-348b-4d4b-897d-c2e47d3b8562","Type":"ContainerStarted","Data":"3726e30d6d9ff697f00b4f2082024c95ddf46c8b406126e3ef93763352da8031"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.783520 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"86146ff6be4fbdea2dbfa27f04fa5b249da125f1f078edda94aeb5ba46d8f665"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.785554 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" event={"ID":"50cce18d-88c6-44b7-9a7d-9a9734a2eba2","Type":"ContainerStarted","Data":"80106f46467fbb515028bcb762bb7d38fcbd08378e95aa63659fe5045e58fce7"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.789700 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" event={"ID":"4aded898-143e-40c9-99b8-5dd45d739d64","Type":"ContainerStarted","Data":"db26b74f94658b708f827c6f20d68e9199423c2f0f5b182e9e81b3aab7a23373"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.794140 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1bc5d66e5aca9bb1ecf60376a08282c2e2dd9e694cc89177a9e1e682f5a7b326"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.794284 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.800250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ffc319cd0d01272a0564e6bfaf496b5fefb0bbc636f3341849aa26db7207ee42"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.808893 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5" event={"ID":"7224c6fe-8b26-4d04-b5be-20515e19eb5b","Type":"ContainerStarted","Data":"a638c70290aba0cf573b0aeabd480a4064130f4c48c822228b340db9f5eb448f"} Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.812622 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-6klpw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.812667 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" podUID="e6d32935-4d3d-43c9-b7c7-8735545d39ba" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection 
refused" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.815414 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8lrbs" podStartSLOduration=130.815398241 podStartE2EDuration="2m10.815398241s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.813107899 +0000 UTC m=+152.360706038" watchObservedRunningTime="2025-12-05 13:59:03.815398241 +0000 UTC m=+152.362996370" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.857361 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.858076 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.358046004 +0000 UTC m=+152.905644143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.861775 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nfm2r" podStartSLOduration=130.861758926 podStartE2EDuration="2m10.861758926s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.857500239 +0000 UTC m=+152.405098398" watchObservedRunningTime="2025-12-05 13:59:03.861758926 +0000 UTC m=+152.409357065" Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.958517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:03 crc kubenswrapper[4858]: E1205 13:59:03.958850 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.458834083 +0000 UTC m=+153.006432232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:03 crc kubenswrapper[4858]: I1205 13:59:03.995226 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fv2vm" podStartSLOduration=131.995206093 podStartE2EDuration="2m11.995206093s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.930169695 +0000 UTC m=+152.477767834" watchObservedRunningTime="2025-12-05 13:59:03.995206093 +0000 UTC m=+152.542804242" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.059722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.060030 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.560013384 +0000 UTC m=+153.107611523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.060249 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.061804 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.061849 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.061880 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.061947 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.061985 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-6klpw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062006 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" podUID="e6d32935-4d3d-43c9-b7c7-8735545d39ba" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062034 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-6klpw container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062051 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" podUID="e6d32935-4d3d-43c9-b7c7-8735545d39ba" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062065 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l2x7g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062079 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" podUID="42ae75c8-e3d2-4328-83ef-4d7279d05abd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062121 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l2x7g container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062138 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" podUID="42ae75c8-e3d2-4328-83ef-4d7279d05abd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062294 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l2x7g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.062314 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" podUID="42ae75c8-e3d2-4328-83ef-4d7279d05abd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.081274 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6qbn5" podStartSLOduration=131.081259967 podStartE2EDuration="2m11.081259967s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:03.996997412 +0000 UTC m=+152.544595561" watchObservedRunningTime="2025-12-05 13:59:04.081259967 +0000 UTC m=+152.628858106" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.083736 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9" podStartSLOduration=131.083728855 podStartE2EDuration="2m11.083728855s" podCreationTimestamp="2025-12-05 13:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:04.08060136 +0000 UTC m=+152.628199499" watchObservedRunningTime="2025-12-05 
13:59:04.083728855 +0000 UTC m=+152.631326994" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.165356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.166539 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.66652388 +0000 UTC m=+153.214122019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.228786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229117 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229157 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229204 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229214 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229372 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-xxk7s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229403 4858 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-xxk7s container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229420 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" podUID="61356f17-0b7f-4482-83f2-5a6d542a4e68" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229417 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" podUID="61356f17-0b7f-4482-83f2-5a6d542a4e68" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229692 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-xxk7s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.229743 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" podUID="61356f17-0b7f-4482-83f2-5a6d542a4e68" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": dial tcp 10.217.0.33:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.239973 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x25gp" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.240026 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x25gp" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.241015 4858 patch_prober.go:28] interesting pod/console-f9d7485db-x25gp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.241059 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x25gp" podUID="1329b103-5d7b-492b-96ed-c7b5b10e8edd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.266022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.266412 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 13:59:04.766392355 +0000 UTC m=+153.313990504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.378051 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.378396 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.878382583 +0000 UTC m=+153.425980712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.380874 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.381116 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9qgzs container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.381139 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" podUID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.381189 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9qgzs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.381201 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" podUID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 05 13:59:04 crc 
kubenswrapper[4858]: I1205 13:59:04.381338 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9qgzs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.381353 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" podUID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.481883 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.482058 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.982032281 +0000 UTC m=+153.529630420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.482287 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.482582 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:04.982575285 +0000 UTC m=+153.530173424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.583687 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.584110 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.084089474 +0000 UTC m=+153.631687613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.671301 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 13:59:04 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Dec 05 13:59:04 crc kubenswrapper[4858]: [+]process-running ok
Dec 05 13:59:04 crc kubenswrapper[4858]: healthz check failed
Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.671369 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.684542 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.684920 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.184904615 +0000 UTC m=+153.732502754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.786997 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.787196 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.287169285 +0000 UTC m=+153.834767424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.787440 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.787789 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.287775752 +0000 UTC m=+153.835373891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.890229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.890419 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.390384072 +0000 UTC m=+153.937982211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:04 crc kubenswrapper[4858]: I1205 13:59:04.890620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:04 crc kubenswrapper[4858]: E1205 13:59:04.893604 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.393592079 +0000 UTC m=+153.941190218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:04.988679 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" podStartSLOduration=132.988664302 podStartE2EDuration="2m12.988664302s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:04.976050125 +0000 UTC m=+153.523648264" watchObservedRunningTime="2025-12-05 13:59:04.988664302 +0000 UTC m=+153.536262441"
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:04.991346 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:04.991624 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.491611493 +0000 UTC m=+154.039209632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.092192 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.092475 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.592463065 +0000 UTC m=+154.140061204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
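Every mount and unmount attempt in this stretch fails at the same step: kubelet cannot build a CSI client because kubevirt.io.hostpath-provisioner is not yet in its list of registered CSI drivers (the hostpath-provisioner node plugin only starts registering further down, when the csi-hostpathplugin-l27jv containers come up). The sketch below shows the shape of that lookup, with hypothetical names (csiDriverRegistry, newCsiDriverClient); it is an illustration, not kubelet's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry is a hypothetical stand-in for kubelet's in-memory
// map of registered CSI drivers (driver name -> unix socket endpoint).
// A driver appears here only after its node plugin registers itself.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string
}

func (r *csiDriverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

// newCsiDriverClient fails with a message shaped like the log lines when
// the driver has not registered yet.
func (r *csiDriverRegistry) newCsiDriverClient(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := &csiDriverRegistry{drivers: map[string]string{}}
	// Before the plugin pod registers: every MountDevice/TearDownAt attempt fails.
	if _, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("Error:", err)
	}
	// Once the plugin registers a socket (path here is illustrative), the lookup succeeds.
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
	ep, _ := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner")
	fmt.Println("endpoint:", ep)
}
```

Both the MountDevice path (for the incoming image-registry pod) and the TearDownAt path (for the old pod 8f668bae-612b-4b75-9490-919e737c6a3b) go through the same lookup, which is why they fail in lockstep until registration happens.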
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.192888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.193092 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.693077079 +0000 UTC m=+154.240675218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.294252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.294809 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.794794094 +0000 UTC m=+154.342392243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.395691 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.396164 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.896144769 +0000 UTC m=+154.443742908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.497341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.497696 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:05.997683279 +0000 UTC m=+154.545281418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.597926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.598142 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.0981258 +0000 UTC m=+154.645723939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.666967 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 13:59:05 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Dec 05 13:59:05 crc kubenswrapper[4858]: [+]process-running ok
Dec 05 13:59:05 crc kubenswrapper[4858]: healthz check failed
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.667016 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.699138 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.699427 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.199415823 +0000 UTC m=+154.747013962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
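The router's startup-probe output just above ([-]backend-http, [-]has-synced, [+]process-running, then "healthz check failed") is the typical rendering of an aggregated healthz endpoint: one named sub-check per line, a "+" or "-" marker, and an overall HTTP 500 as soon as any check fails. A toy aggregator in that shape, assuming nothing about the router's real check implementations:

```go
package main

import "fmt"

// check is one named health check. The output format below mirrors the
// "[+]name ok" / "[-]name failed: reason withheld" lines in the probe
// output; this is an illustrative aggregator, not the router's code.
type check struct {
	name string
	run  func() error
}

// healthz runs all checks and returns the response body plus the overall
// verdict; a false verdict would be served with HTTP status 500, which is
// exactly what the kubelet prober reports above.
func healthz(checks []check) (string, bool) {
	body := ""
	healthy := true
	for _, c := range checks {
		if err := c.run(); err != nil {
			healthy = false
			body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
		} else {
			body += fmt.Sprintf("[+]%s ok\n", c.name)
		}
	}
	if !healthy {
		body += "healthz check failed\n"
	}
	return body, healthy
}

func main() {
	body, ok := healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("no healthy backends") }},
		{"has-synced", func() error { return fmt.Errorf("route cache not synced") }},
		{"process-running", func() error { return nil }},
	})
	fmt.Print(body)
	fmt.Println("healthy:", ok)
}
```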
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.800479 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.800810 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.300796128 +0000 UTC m=+154.848394267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:05 crc kubenswrapper[4858]: I1205 13:59:05.912233 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:05 crc kubenswrapper[4858]: E1205 13:59:05.912817 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.412798416 +0000 UTC m=+154.960396555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.015122 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.015282 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.515265262 +0000 UTC m=+155.062863391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.043866 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.044437 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.048370 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.048534 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.099568 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.116664 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.116939 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11aed447-46fa-47b9-964f-ee26867aa8e1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11aed447-46fa-47b9-964f-ee26867aa8e1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.116955 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.616941826 +0000 UTC m=+155.164539965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
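The "SyncLoop ADD", "No sandbox for pod can be found. Need to start a new one", and "SyncLoop UPDATE" lines above are the kubelet sync loop picking up a freshly created pod (revision-pruner-9-crc) from the API server and noticing it has no runtime sandbox yet. A toy dispatch in that shape; this is an illustration, not kubelet's syncLoopIteration:

```go
package main

import "fmt"

// podEvent loosely mirrors the sync-loop events visible in the log
// ("SyncLoop ADD", "SyncLoop UPDATE").
type podEvent struct {
	op  string // "ADD", "UPDATE", ...
	pod string // namespace/name
}

// sandboxState tracks which pods already have a running sandbox.
type sandboxState map[string]bool

func (s sandboxState) handle(ev podEvent) {
	switch ev.op {
	case "ADD":
		if !s[ev.pod] {
			// Mirrors the util.go:30 message: a brand-new pod has no sandbox yet,
			// so the kubelet must ask the runtime to create one.
			fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", ev.pod)
			s[ev.pod] = true
		}
	case "UPDATE":
		fmt.Printf("SyncLoop UPDATE pod=%q\n", ev.pod)
	}
}

func main() {
	s := sandboxState{}
	for _, ev := range []podEvent{
		{"ADD", "openshift-kube-controller-manager/revision-pruner-9-crc"},
		{"UPDATE", "openshift-kube-controller-manager/revision-pruner-9-crc"},
	} {
		s.handle(ev)
	}
}
```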
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.117094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11aed447-46fa-47b9-964f-ee26867aa8e1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11aed447-46fa-47b9-964f-ee26867aa8e1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.218367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.218539 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11aed447-46fa-47b9-964f-ee26867aa8e1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11aed447-46fa-47b9-964f-ee26867aa8e1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.218569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11aed447-46fa-47b9-964f-ee26867aa8e1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11aed447-46fa-47b9-964f-ee26867aa8e1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.218664 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11aed447-46fa-47b9-964f-ee26867aa8e1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"11aed447-46fa-47b9-964f-ee26867aa8e1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.218737 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.718722283 +0000 UTC m=+155.266320422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.319259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.319616 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.819601355 +0000 UTC m=+155.367199494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.365557 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11aed447-46fa-47b9-964f-ee26867aa8e1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"11aed447-46fa-47b9-964f-ee26867aa8e1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.421032 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.421301 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:06.921286039 +0000 UTC m=+155.468884168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.525338 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.526135 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.026119759 +0000 UTC m=+155.573717898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.533310 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.533940 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.557455 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.557562 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.562465 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.626475 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.626755 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.126724284 +0000 UTC m=+155.674322423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
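The reconciler_common.go lines above show the volume reconciler's two-sided compare: volumes still mounted for the deleted pod 8f668bae-612b-4b75-9490-919e737c6a3b get UnmountVolume operations, while volumes wanted by new pods (the CSI PVC for image-registry-697d97f7c8-trcq9, kube-api-access and kubelet-dir for the revision pruners) get VerifyControllerAttachedVolume and MountVolume operations. The host-path and projected volumes succeed immediately; only the CSI PVC keeps failing. A toy desired-vs-actual pass, with illustrative names only:

```go
package main

import "fmt"

// reconcile compares the desired world (volumes pods want mounted) with
// the actual world (volumes currently mounted) and issues unmount and
// mount operations for the difference, loosely like the kubelet volume
// reconciler. Each operation can independently fail and be retried on
// the next pass, which is exactly the repeating pattern in this log.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
		}
	}
	for vol := range desired {
		if !actual[vol] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
		}
	}
}

func main() {
	// The PVC is desired by the new registry pod but still mounted for the
	// old, deleted pod, so every pass logs both an unmount and a mount.
	desired := map[string]bool{
		"pvc-657094db (image-registry)":   true,
		"kube-api-access (revision-pruner)": true,
	}
	actual := map[string]bool{
		"pvc-657094db (deleted pod)": true,
	}
	reconcile(desired, actual)
}
```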
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.627011 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.627353 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.127344021 +0000 UTC m=+155.674942160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.665776 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.682870 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 13:59:06 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Dec 05 13:59:06 crc kubenswrapper[4858]: [+]process-running ok
Dec 05 13:59:06 crc kubenswrapper[4858]: healthz check failed
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.682925 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.731178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.731355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dc8df9-662d-49a6-a604-ee0294519e50-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c3dc8df9-662d-49a6-a604-ee0294519e50\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.731389 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3dc8df9-662d-49a6-a604-ee0294519e50-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c3dc8df9-662d-49a6-a604-ee0294519e50\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.731501 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.231487102 +0000 UTC m=+155.779085241 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.836423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.838301 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.338284987 +0000 UTC m=+155.885883126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.842930 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dc8df9-662d-49a6-a604-ee0294519e50-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c3dc8df9-662d-49a6-a604-ee0294519e50\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.842970 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3dc8df9-662d-49a6-a604-ee0294519e50-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c3dc8df9-662d-49a6-a604-ee0294519e50\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.843048 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3dc8df9-662d-49a6-a604-ee0294519e50-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c3dc8df9-662d-49a6-a604-ee0294519e50\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.857028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerStarted","Data":"64c8895a7e09fb25f03a7f4d9304d6e9fa0039e4fb829eeb9c5545d9b86c887b"}
Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.944048 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
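The csi-hostpathplugin-l27jv containers starting above are what will eventually clear the mount errors: a CSI node plugin drops a registration socket under the kubelet plugin-registration directory, kubelet dials it, asks for the driver name, and adds the driver to its registry. The directory scan below is a simplified, poll-based illustration only; kubelet actually watches the directory for changes, and the path plus the *-reg.sock naming follow the usual node-driver-registrar convention, so treat both as assumptions here.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// pluginRegistrationDir is the conventional kubelet plugin-registration
// directory on a node; assumed here, not taken from this log.
const pluginRegistrationDir = "/var/lib/kubelet/plugins_registry"

// scanPluginSockets lists candidate registration sockets. In the real
// flow, kubelet would dial each socket over gRPC and call GetInfo to
// learn the driver name (e.g. kubevirt.io.hostpath-provisioner).
func scanPluginSockets(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var sockets []string
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), ".sock") {
			sockets = append(sockets, filepath.Join(dir, e.Name()))
		}
	}
	return sockets, nil
}

func main() {
	socks, err := scanPluginSockets(pluginRegistrationDir)
	if err != nil {
		fmt.Println("cannot scan:", err) // expected when run off-node
		return
	}
	for _, s := range socks {
		fmt.Println("would dial and GetInfo:", s)
	}
}
```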
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.944190 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.444163487 +0000 UTC m=+155.991761626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.944288 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:06 crc kubenswrapper[4858]: E1205 13:59:06.944642 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.444627809 +0000 UTC m=+155.992225948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:06 crc kubenswrapper[4858]: I1205 13:59:06.983459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dc8df9-662d-49a6-a604-ee0294519e50-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c3dc8df9-662d-49a6-a604-ee0294519e50\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.045638 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.046133 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.546117798 +0000 UTC m=+156.093715937 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.147001 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.147347 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.647329799 +0000 UTC m=+156.194927938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.182204 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.204123 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.204178 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.204523 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.204548 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.248113 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.248398 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.748381246 +0000 UTC m=+156.295979385 (durationBeforeRetry 500ms). 
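The openshift-config-operator probes above fail with a plain TCP "connection refused", meaning the prober never even got an HTTP status from the container. A minimal HTTPS prober in that shape: any transport error or non-2xx status counts as a failed probe. Like kubelet's prober it skips certificate verification; this is a sketch only, not kubelet's prober package.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP GET against a pod endpoint. A dial error
// surfaces as "dial tcp ...: connect: connection refused", exactly the
// output recorded in the log; a non-2xx status is also a failure.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		// Probe targets use self-signed certs; skip verification as an
		// HTTPS probe would (sketch-level assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. Get "https://10.217.0.8:8443/healthz": ... connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("https://10.217.0.8:8443/healthz", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```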
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.248611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.248844 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.748836169 +0000 UTC m=+156.296434308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.353588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.353947 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.853931217 +0000 UTC m=+156.401529356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.427774 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn"
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.428998 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn"
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.429738 4858 patch_prober.go:28] interesting pod/apiserver-76f77b778f-c7tvn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.429767 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" podUID="5f47f6b4-2307-4660-b7d6-61a604ee2a81" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.456299 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.456552 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:07.956540786 +0000 UTC m=+156.504138925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.477248 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 05 13:59:07 crc kubenswrapper[4858]: W1205 13:59:07.520414 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod11aed447_46fa_47b9_964f_ee26867aa8e1.slice/crio-a0e79dba0f651491c1948c716e51a9a6aa9704139ae415ae8cb1c7f31a0675a2 WatchSource:0}: Error finding container a0e79dba0f651491c1948c716e51a9a6aa9704139ae415ae8cb1c7f31a0675a2: Status 404 returned error can't find the container with id a0e79dba0f651491c1948c716e51a9a6aa9704139ae415ae8cb1c7f31a0675a2
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.565002 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.565333 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.065316816 +0000 UTC m=+156.612914955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.680513 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.681116 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.181104356 +0000 UTC m=+156.728702495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
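Each failed operation in this section is gated by nestedpendingoperations.go: after a failure, no retry of that operation is permitted until lastErrorTime plus durationBeforeRetry. The log shows a constant 500ms here; kubelet-style operation backoff generally doubles the wait on repeated failures of the same operation up to a cap. A sketch of that gate with assumed constants (the cap value below is illustrative, not taken from kubelet):

```go
package main

import (
	"fmt"
	"time"
)

// expBackoff mimics the shape of a per-operation retry gate: after each
// failure the operation is blocked until lastError + duration, and the
// wait doubles up to a cap.
type expBackoff struct {
	lastError time.Time
	duration  time.Duration
}

const (
	initialDurationBeforeRetry = 500 * time.Millisecond // matches the log's 500ms
	maxDurationBeforeRetry     = 2 * time.Minute        // assumed cap
)

func (b *expBackoff) recordError(now time.Time) {
	if b.duration == 0 {
		b.duration = initialDurationBeforeRetry
	} else {
		b.duration *= 2
		if b.duration > maxDurationBeforeRetry {
			b.duration = maxDurationBeforeRetry
		}
	}
	b.lastError = now
}

// allowed reports whether a retry may start at time now.
func (b *expBackoff) allowed(now time.Time) bool {
	return now.After(b.lastError.Add(b.duration))
}

func main() {
	var b expBackoff
	now := time.Now()
	b.recordError(now)
	fmt.Printf("No retries permitted until %s (durationBeforeRetry %s)\n",
		now.Add(b.duration).Format(time.RFC3339Nano), b.duration)
	fmt.Println("retry allowed now?", b.allowed(now))
}
```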
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.689492 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 13:59:07 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Dec 05 13:59:07 crc kubenswrapper[4858]: [+]process-running ok
Dec 05 13:59:07 crc kubenswrapper[4858]: healthz check failed
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.689538 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.694861 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9"
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.695720 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9"
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.784289 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.784597 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.2845724 +0000 UTC m=+156.832170539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.784632 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.785440 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.285431764 +0000 UTC m=+156.833029903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.886259 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.886937 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.386912072 +0000 UTC m=+156.934510211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.897371 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"11aed447-46fa-47b9-964f-ee26867aa8e1","Type":"ContainerStarted","Data":"a0e79dba0f651491c1948c716e51a9a6aa9704139ae415ae8cb1c7f31a0675a2"}
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.926055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerStarted","Data":"9b8bc288bf89aeca330b18edafa20c5044f4df585b4e97bb2f21cbaed8f9cd79"}
Dec 05 13:59:07 crc kubenswrapper[4858]: I1205 13:59:07.991806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:07 crc kubenswrapper[4858]: E1205 13:59:07.994296 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.494279123 +0000 UTC m=+157.041877262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.093249 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.093444 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.593417397 +0000 UTC m=+157.141015536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.093525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.093848 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.593819608 +0000 UTC m=+157.141417747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.197791 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.198028 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.698012941 +0000 UTC m=+157.245611080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.302197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.302508 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.802496452 +0000 UTC m=+157.350094591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.434218 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.434504 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:08.934490449 +0000 UTC m=+157.482088588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.440163 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2g9nd"] Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.441066 4858 util.go:30] "No sandbox for pod can be found. 
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.441066 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.478800 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.517188 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6cpxg"]
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.518272 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.535627 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-catalog-content\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.535708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.535765 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwpm\" (UniqueName: \"kubernetes.io/projected/3175524c-136d-44a0-9324-0d063376c05f-kube-api-access-sdwpm\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.535787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-utilities\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.536057 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.03603882 +0000 UTC m=+157.583636959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.543160 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.601208 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6cpxg"]
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.637137 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.637284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-catalog-content\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.637359 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwpm\" (UniqueName: \"kubernetes.io/projected/3175524c-136d-44a0-9324-0d063376c05f-kube-api-access-sdwpm\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.637380 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-utilities\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.637397 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vkc6\" (UniqueName: \"kubernetes.io/projected/e65f2d84-01e5-440d-b92c-79227561f3c0-kube-api-access-9vkc6\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.637415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-utilities\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.637453 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-catalog-content\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.637547 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.137531048 +0000 UTC m=+157.685129187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.637912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-catalog-content\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.638361 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-utilities\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.672257 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 13:59:08 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Dec 05 13:59:08 crc kubenswrapper[4858]: [+]process-running ok
Dec 05 13:59:08 crc kubenswrapper[4858]: healthz check failed
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.672316 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.695706 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2g9nd"]
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.706584 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d27qp"]
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.707709 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.724150 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwpm\" (UniqueName: \"kubernetes.io/projected/3175524c-136d-44a0-9324-0d063376c05f-kube-api-access-sdwpm\") pod \"certified-operators-2g9nd\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.741527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-utilities\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.741583 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vkc6\" (UniqueName: \"kubernetes.io/projected/e65f2d84-01e5-440d-b92c-79227561f3c0-kube-api-access-9vkc6\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.741628 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-catalog-content\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.741661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.741937 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.241925627 +0000 UTC m=+157.789523756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.742274 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-utilities\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.742733 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-catalog-content\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.763767 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2g9nd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.792500 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vkc6\" (UniqueName: \"kubernetes.io/projected/e65f2d84-01e5-440d-b92c-79227561f3c0-kube-api-access-9vkc6\") pod \"community-operators-6cpxg\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.832166 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6cpxg"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.842545 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.842807 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dfr8\" (UniqueName: \"kubernetes.io/projected/825a6e39-523e-4040-bee6-14b3ed5d2000-kube-api-access-4dfr8\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.842862 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-catalog-content\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.842910 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-utilities\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.843006 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.342974624 +0000 UTC m=+157.890572773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:08 crc kubenswrapper[4858]: W1205 13:59:08.855040 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc3dc8df9_662d_49a6_a604_ee0294519e50.slice/crio-c48448768e0920a7172ea039a758ddf38b7de697fec69e89e370bf91cb52378a WatchSource:0}: Error finding container c48448768e0920a7172ea039a758ddf38b7de697fec69e89e370bf91cb52378a: Status 404 returned error can't find the container with id c48448768e0920a7172ea039a758ddf38b7de697fec69e89e370bf91cb52378a
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.869244 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d27qp"]
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.872533 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.873698 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9z5pd"]
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.879776 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.982394 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-utilities\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.982427 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-catalog-content\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.982451 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dfr8\" (UniqueName: \"kubernetes.io/projected/825a6e39-523e-4040-bee6-14b3ed5d2000-kube-api-access-4dfr8\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.982470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-catalog-content\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.982506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-utilities\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.982528 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvmw\" (UniqueName: \"kubernetes.io/projected/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-kube-api-access-cqvmw\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.982556 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:08 crc kubenswrapper[4858]: E1205 13:59:08.982786 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.482775155 +0000 UTC m=+158.030373294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.983148 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-utilities\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:08 crc kubenswrapper[4858]: I1205 13:59:08.983344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-catalog-content\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.002366 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"11aed447-46fa-47b9-964f-ee26867aa8e1","Type":"ContainerStarted","Data":"2a6443eaae92c557d0fe1f986a5f9eb724c4577301d59f1d781e9b6c4dceb147"}
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.004346 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c3dc8df9-662d-49a6-a604-ee0294519e50","Type":"ContainerStarted","Data":"c48448768e0920a7172ea039a758ddf38b7de697fec69e89e370bf91cb52378a"}
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.012184 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9z5pd"]
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.070527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerStarted","Data":"d760f1907015344ed2e0efca3663bcf05625742bc6123f022ebcd1dbf3de9ef2"}
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.072593 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dfr8\" (UniqueName: \"kubernetes.io/projected/825a6e39-523e-4040-bee6-14b3ed5d2000-kube-api-access-4dfr8\") pod \"certified-operators-d27qp\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.076976 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.083418 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.085399 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.585379915 +0000 UTC m=+158.132978054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.085602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqvmw\" (UniqueName: \"kubernetes.io/projected/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-kube-api-access-cqvmw\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.085654 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.085740 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-utilities\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.085765 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-catalog-content\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.086267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-catalog-content\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.086816 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.586807784 +0000 UTC m=+158.134405923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.087184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-utilities\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.095623 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m96p9"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.159294 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5c95q"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.176986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqvmw\" (UniqueName: \"kubernetes.io/projected/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-kube-api-access-cqvmw\") pod \"community-operators-9z5pd\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.191376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.192342 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.692324694 +0000 UTC m=+158.239922833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.214620 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.214603716 podStartE2EDuration="3.214603716s" podCreationTimestamp="2025-12-05 13:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:09.146050482 +0000 UTC m=+157.693648621" watchObservedRunningTime="2025-12-05 13:59:09.214603716 +0000 UTC m=+157.762201855"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.295630 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.296768 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.796756493 +0000 UTC m=+158.344354632 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.299654 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9z5pd"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.325338 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d27qp"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.326232 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podStartSLOduration=19.326212942 podStartE2EDuration="19.326212942s" podCreationTimestamp="2025-12-05 13:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:09.318488671 +0000 UTC m=+157.866086810" watchObservedRunningTime="2025-12-05 13:59:09.326212942 +0000 UTC m=+157.873811081"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.397639 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.398031 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:09.898016016 +0000 UTC m=+158.445614155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.503865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.504207 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.004194423 +0000 UTC m=+158.551792562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.605262 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.605628 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.10560601 +0000 UTC m=+158.653204149 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.669647 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 13:59:09 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Dec 05 13:59:09 crc kubenswrapper[4858]: [+]process-running ok
Dec 05 13:59:09 crc kubenswrapper[4858]: healthz check failed
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.670158 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.706558 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.706882 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.206869062 +0000 UTC m=+158.754467201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.810437 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.811022 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.311004395 +0000 UTC m=+158.858602534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:09 crc kubenswrapper[4858]: I1205 13:59:09.912467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:09 crc kubenswrapper[4858]: E1205 13:59:09.912847 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.412819812 +0000 UTC m=+158.960417951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.013116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.013251 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.513225941 +0000 UTC m=+159.060824080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.013382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.076623 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c3dc8df9-662d-49a6-a604-ee0294519e50","Type":"ContainerStarted","Data":"b25bd48aeb66c0f03e10504ab13ddca623c07a4711e189b6fced99112d102184"} Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.100788 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.100773427 podStartE2EDuration="4.100773427s" podCreationTimestamp="2025-12-05 13:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:10.099846441 +0000 UTC m=+158.647444580" watchObservedRunningTime="2025-12-05 13:59:10.100773427 +0000 UTC m=+158.648371566" Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.114711 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.114936 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.614920376 +0000 UTC m=+159.162518515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.115134 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.115415 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.615407459 +0000 UTC m=+159.163005598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.139652 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": dial tcp 10.217.0.42:9898: connect: connection refused" Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.216241 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.217151 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.717134454 +0000 UTC m=+159.264732593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.319695 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.320062 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.820045992 +0000 UTC m=+159.367644131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.421606 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.421873 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.9218527 +0000 UTC m=+159.469450839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.421989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.422328 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:10.922315922 +0000 UTC m=+159.469914061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.522742 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:11.022712141 +0000 UTC m=+159.570310280 (durationBeforeRetry 500ms). 
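Each "No retries permitted until ... (durationBeforeRetry 500ms)" record shows the operation-level backoff gate: after a failure, the volume's unique name is frozen for a backoff window before the reconciler may retry. A sketch of that bookkeeping, assuming the fixed 500ms window seen in these entries; the kubelet's actual policy grows the window exponentially on repeated failures, and the type names here are invented.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // retryGate freezes an operation key until its backoff deadline passes.
    type retryGate struct {
        mu   sync.Mutex
        next map[string]time.Time
    }

    func (g *retryGate) tryStart(key string) error {
        g.mu.Lock()
        defer g.mu.Unlock()
        if until, ok := g.next[key]; ok && time.Now().Before(until) {
            return fmt.Errorf("no retries permitted until %s", until.Format(time.RFC3339Nano))
        }
        return nil
    }

    func (g *retryGate) recordFailure(key string, backoff time.Duration) {
        g.mu.Lock()
        defer g.mu.Unlock()
        g.next[key] = time.Now().Add(backoff)
    }

    func main() {
        g := &retryGate{next: map[string]time.Time{}}
        key := "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
        g.recordFailure(key, 500*time.Millisecond)
        fmt.Println(g.tryStart(key)) // refused: still inside the 500ms window
    }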
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.522780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.523010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.523300 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:11.023289826 +0000 UTC m=+159.570887965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.624484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.625155 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:11.125135165 +0000 UTC m=+159.672733304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.670381 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 13:59:10 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Dec 05 13:59:10 crc kubenswrapper[4858]: [+]process-running ok Dec 05 13:59:10 crc kubenswrapper[4858]: healthz check failed Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.670430 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.753525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.753882 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:11.253867993 +0000 UTC m=+159.801466132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.854925 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.855150 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:11.355133805 +0000 UTC m=+159.902731944 (durationBeforeRetry 500ms). 
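The router startup probe output above is the aggregated-healthz style: each named check prints a [+] or [-] line, reasons are withheld, and any failing check turns the whole response into a 500 ending in "healthz check failed". A sketch of a handler producing that shape; the check names are copied from the log, the stub results are invented.

    package main

    import (
        "errors"
        "fmt"
        "net/http"
    )

    // aggregatedHealthz renders each check as [+]name ok or [-]name failed
    // and returns 500 if any check fails.
    func aggregatedHealthz(checks map[string]error) http.HandlerFunc {
        return func(w http.ResponseWriter, _ *http.Request) {
            failed := false
            body := ""
            for name, err := range checks {
                if err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError)
                fmt.Fprint(w, body+"healthz check failed\n")
                return
            }
            fmt.Fprint(w, body+"ok\n")
        }
    }

    func main() {
        http.HandleFunc("/healthz", aggregatedHealthz(map[string]error{
            "backend-http":    errors.New("not ready"),
            "has-synced":      errors.New("not ready"),
            "process-running": nil,
        }))
        _ = http.ListenAndServe(":8080", nil) // sketch only; port is arbitrary
    }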
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.895296 4858 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 05 13:59:10 crc kubenswrapper[4858]: I1205 13:59:10.956333 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:10 crc kubenswrapper[4858]: E1205 13:59:10.956634 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:11.456621645 +0000 UTC m=+160.004219784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.035666 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fm4gw"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.036782 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.058365 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.059017 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:11 crc kubenswrapper[4858]: E1205 13:59:11.059479 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 13:59:11.55946014 +0000 UTC m=+160.107058279 (durationBeforeRetry 500ms). 
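The plugin_watcher entry above is the turning point: the registration socket kubevirt.io.hostpath-provisioner-reg.sock has appeared under /var/lib/kubelet/plugins_registry and is recorded in the desired-state cache. The kubelet watches that directory with fsnotify; the poll-based stand-in below only sketches the idea, recording each *.sock file with the time it was first seen.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "time"
    )

    // scanPluginSockets records newly appeared registration sockets.
    func scanPluginSockets(dir string, cache map[string]time.Time) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err // directory will not exist off a kubelet host
        }
        for _, e := range entries {
            if e.IsDir() || !strings.HasSuffix(e.Name(), ".sock") {
                continue
            }
            path := filepath.Join(dir, e.Name())
            if _, seen := cache[path]; !seen {
                cache[path] = time.Now()
                fmt.Printf("Adding socket path or updating timestamp to desired state cache path=%q\n", path)
            }
        }
        return nil
    }

    func main() {
        cache := map[string]time.Time{}
        _ = scanPluginSockets("/var/lib/kubelet/plugins_registry", cache)
    }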
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.081286 4858 generic.go:334] "Generic (PLEG): container finished" podID="11aed447-46fa-47b9-964f-ee26867aa8e1" containerID="2a6443eaae92c557d0fe1f986a5f9eb724c4577301d59f1d781e9b6c4dceb147" exitCode=0 Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.081347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"11aed447-46fa-47b9-964f-ee26867aa8e1","Type":"ContainerDied","Data":"2a6443eaae92c557d0fe1f986a5f9eb724c4577301d59f1d781e9b6c4dceb147"} Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.082299 4858 generic.go:334] "Generic (PLEG): container finished" podID="c3dc8df9-662d-49a6-a604-ee0294519e50" containerID="b25bd48aeb66c0f03e10504ab13ddca623c07a4711e189b6fced99112d102184" exitCode=0 Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.082322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c3dc8df9-662d-49a6-a604-ee0294519e50","Type":"ContainerDied","Data":"b25bd48aeb66c0f03e10504ab13ddca623c07a4711e189b6fced99112d102184"} Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.094684 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.138777 4858 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-05T13:59:10.89531734Z","Handler":null,"Name":""} Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.160690 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24sn\" (UniqueName: \"kubernetes.io/projected/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-kube-api-access-p24sn\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.160757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-catalog-content\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.160817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-utilities\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.160855 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:11 crc kubenswrapper[4858]: E1205 13:59:11.161118 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 13:59:11.661106293 +0000 UTC m=+160.208704432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-trcq9" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.174886 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fm4gw"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.193433 4858 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.193469 4858 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.264338 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.264635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-utilities\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.264696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24sn\" (UniqueName: \"kubernetes.io/projected/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-kube-api-access-p24sn\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.264733 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-catalog-content\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.265264 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-catalog-content\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.265966 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-utilities\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.334707 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-md2f2"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.335590 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: W1205 13:59:11.369972 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3175524c_136d_44a0_9324_0d063376c05f.slice/crio-9b7db6ea698937f4e9541dea2ebd492a79d5e0116b3d6ba953ac7439afd9a8c1 WatchSource:0}: Error finding container 9b7db6ea698937f4e9541dea2ebd492a79d5e0116b3d6ba953ac7439afd9a8c1: Status 404 returned error can't find the container with id 9b7db6ea698937f4e9541dea2ebd492a79d5e0116b3d6ba953ac7439afd9a8c1 Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.421410 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2g9nd"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.452910 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24sn\" (UniqueName: \"kubernetes.io/projected/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-kube-api-access-p24sn\") pod \"redhat-marketplace-fm4gw\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.453466 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-md2f2"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.470998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-utilities\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.471051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-catalog-content\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.471080 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqxz\" (UniqueName: \"kubernetes.io/projected/14af5b55-95bb-4d81-a390-3cbdc232f270-kube-api-access-7rqxz\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.503923 4858 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.525230 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j9hq9"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.526226 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.536142 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.572031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-utilities\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.572077 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-catalog-content\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.572106 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqxz\" (UniqueName: \"kubernetes.io/projected/14af5b55-95bb-4d81-a390-3cbdc232f270-kube-api-access-7rqxz\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.572169 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.573634 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-catalog-content\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.573949 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-utilities\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.588584 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9z5pd"] 
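The csi_plugin.go entries above close the loop: the new driver is validated (advertised version 1.0.0, endpoint /var/lib/kubelet/plugins/csi-hostpath/csi.sock) and registered under its name, and immediately afterwards the long-failing UnmountVolume.TearDown succeeds. A sketch of that validate-then-register step; the exact-match check against "1.0.0" is a simplification of real CSI version negotiation, and the registry map is illustrative.

    package main

    import "fmt"

    // registeredDrivers stands in for the kubelet's driver-name registry.
    var registeredDrivers = map[string]string{}

    // registerCSIDriver accepts the plugin only if it advertises a supported
    // version, then maps its name to its endpoint, unblocking queued mount
    // and unmount operations.
    func registerCSIDriver(name, endpoint string, versions []string) error {
        supported := map[string]bool{"1.0.0": true}
        for _, v := range versions {
            if supported[v] {
                registeredDrivers[name] = endpoint
                return nil
            }
        }
        return fmt.Errorf("none of the advertised versions %v of driver %s are supported", versions, name)
    }

    func main() {
        err := registerCSIDriver("kubevirt.io.hostpath-provisioner",
            "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", []string{"1.0.0"})
        fmt.Println(err) // <nil>: lookups by driver name now succeed
    }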
Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.609608 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9hq9"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.626325 4858 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.626371 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.673651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-utilities\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.673744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqx6p\" (UniqueName: \"kubernetes.io/projected/792eaec2-2c9f-487c-ab4b-437fa7897bee-kube-api-access-kqx6p\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.673785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-catalog-content\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.673954 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 13:59:11 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Dec 05 13:59:11 crc kubenswrapper[4858]: [+]process-running ok Dec 05 13:59:11 crc kubenswrapper[4858]: healthz check failed Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.673985 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.680379 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.696865 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqxz\" (UniqueName: \"kubernetes.io/projected/14af5b55-95bb-4d81-a390-3cbdc232f270-kube-api-access-7rqxz\") pod \"redhat-marketplace-md2f2\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") " pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.702440 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dst87"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.713447 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.762990 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dst87"] Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.774599 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-catalog-content\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.774883 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-utilities\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.774940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqx6p\" (UniqueName: \"kubernetes.io/projected/792eaec2-2c9f-487c-ab4b-437fa7897bee-kube-api-access-kqx6p\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.776303 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-utilities\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.793138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-catalog-content\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.833630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqx6p\" (UniqueName: \"kubernetes.io/projected/792eaec2-2c9f-487c-ab4b-437fa7897bee-kube-api-access-kqx6p\") pod \"redhat-operators-j9hq9\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.860154 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.884743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-utilities\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.884844 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzzf6\" (UniqueName: \"kubernetes.io/projected/a39fea16-b688-40d4-8077-1bbd6d653cf4-kube-api-access-wzzf6\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.884885 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-catalog-content\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.938232 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.954465 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.981552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-trcq9\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.987742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzzf6\" (UniqueName: \"kubernetes.io/projected/a39fea16-b688-40d4-8077-1bbd6d653cf4-kube-api-access-wzzf6\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.987799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-catalog-content\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.987865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-utilities\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.988317 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-utilities\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:11 crc kubenswrapper[4858]: I1205 13:59:11.988528 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-catalog-content\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.022701 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6cpxg"] Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.026305 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d27qp"] Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.048929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzzf6\" (UniqueName: \"kubernetes.io/projected/a39fea16-b688-40d4-8077-1bbd6d653cf4-kube-api-access-wzzf6\") pod \"redhat-operators-dst87\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.091295 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dst87" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.213171 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2g9nd" event={"ID":"3175524c-136d-44a0-9324-0d063376c05f","Type":"ContainerStarted","Data":"00ca6c075835fdccec329c95fa7bf48525735894c7ad4320d1ed40f3e216af43"} Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.213209 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2g9nd" event={"ID":"3175524c-136d-44a0-9324-0d063376c05f","Type":"ContainerStarted","Data":"9b7db6ea698937f4e9541dea2ebd492a79d5e0116b3d6ba953ac7439afd9a8c1"} Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.237039 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9z5pd" event={"ID":"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b","Type":"ContainerStarted","Data":"8eeee7f2008efd2c7e0c091f4ba9cba1606c0aaa1c2d4ef98e5f41cdd1c3d4a4"} Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.238628 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.238655 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.243776 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:12 crc 
kubenswrapper[4858]: I1205 13:59:12.243809 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.247155 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.263027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cpxg" event={"ID":"e65f2d84-01e5-440d-b92c-79227561f3c0","Type":"ContainerStarted","Data":"4d2669b2ffb24a7eaa5f5833534c276a103fbc0bfe1789f33358d957bb1d2131"} Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.320259 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.330416 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d27qp" event={"ID":"825a6e39-523e-4040-bee6-14b3ed5d2000","Type":"ContainerStarted","Data":"2d458bbc13a730a08a6c7e9830d234c955015a5ff4460c2b51906b61e6a1ed3d"} Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.353198 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.447621 4858 patch_prober.go:28] interesting pod/apiserver-76f77b778f-c7tvn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]log ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]etcd ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/generic-apiserver-start-informers ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/max-in-flight-filter ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 05 13:59:12 crc kubenswrapper[4858]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 05 13:59:12 crc kubenswrapper[4858]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/project.openshift.io-projectcache ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/openshift.io-startinformers ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 05 13:59:12 crc kubenswrapper[4858]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 05 13:59:12 crc kubenswrapper[4858]: livez check failed Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.447685 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" podUID="5f47f6b4-2307-4660-b7d6-61a604ee2a81" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.448236 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.672302 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 13:59:12 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Dec 05 13:59:12 crc kubenswrapper[4858]: [+]process-running ok Dec 05 13:59:12 crc kubenswrapper[4858]: healthz check failed Dec 05 13:59:12 crc kubenswrapper[4858]: I1205 13:59:12.672579 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.175508 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fm4gw"] Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.367577 4858 generic.go:334] "Generic (PLEG): container finished" podID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerID="c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f" exitCode=0 Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.367931 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d27qp" event={"ID":"825a6e39-523e-4040-bee6-14b3ed5d2000","Type":"ContainerDied","Data":"c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f"} Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.383806 4858 generic.go:334] "Generic (PLEG): container finished" podID="3175524c-136d-44a0-9324-0d063376c05f" containerID="00ca6c075835fdccec329c95fa7bf48525735894c7ad4320d1ed40f3e216af43" exitCode=0 Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.383883 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2g9nd" event={"ID":"3175524c-136d-44a0-9324-0d063376c05f","Type":"ContainerDied","Data":"00ca6c075835fdccec329c95fa7bf48525735894c7ad4320d1ed40f3e216af43"} Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.409486 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fm4gw" event={"ID":"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c","Type":"ContainerStarted","Data":"feee159d093c938f6e5d732c08b9a1856618e45144e36e38fdc5a016f2f50469"} Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.469329 4858 generic.go:334] "Generic (PLEG): container finished" podID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" containerID="fe6002cd1fb9790fe773cc78d90d919ff60f2e88e143c56c45bbeccf997ea5dc" exitCode=0 Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.469673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9z5pd" event={"ID":"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b","Type":"ContainerDied","Data":"fe6002cd1fb9790fe773cc78d90d919ff60f2e88e143c56c45bbeccf997ea5dc"} Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.470644 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j9hq9"] Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.527310 4858 generic.go:334] "Generic (PLEG): container 
finished" podID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerID="9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3" exitCode=0 Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.527349 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cpxg" event={"ID":"e65f2d84-01e5-440d-b92c-79227561f3c0","Type":"ContainerDied","Data":"9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3"} Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.543313 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.644431 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11aed447-46fa-47b9-964f-ee26867aa8e1-kube-api-access\") pod \"11aed447-46fa-47b9-964f-ee26867aa8e1\" (UID: \"11aed447-46fa-47b9-964f-ee26867aa8e1\") " Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.644504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11aed447-46fa-47b9-964f-ee26867aa8e1-kubelet-dir\") pod \"11aed447-46fa-47b9-964f-ee26867aa8e1\" (UID: \"11aed447-46fa-47b9-964f-ee26867aa8e1\") " Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.671289 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11aed447-46fa-47b9-964f-ee26867aa8e1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11aed447-46fa-47b9-964f-ee26867aa8e1" (UID: "11aed447-46fa-47b9-964f-ee26867aa8e1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.680338 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 13:59:13 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Dec 05 13:59:13 crc kubenswrapper[4858]: [+]process-running ok Dec 05 13:59:13 crc kubenswrapper[4858]: healthz check failed Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.680405 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.696234 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.712096 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11aed447-46fa-47b9-964f-ee26867aa8e1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11aed447-46fa-47b9-964f-ee26867aa8e1" (UID: "11aed447-46fa-47b9-964f-ee26867aa8e1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.748885 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11aed447-46fa-47b9-964f-ee26867aa8e1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.748923 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11aed447-46fa-47b9-964f-ee26867aa8e1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.769440 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dst87"] Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.849431 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dc8df9-662d-49a6-a604-ee0294519e50-kube-api-access\") pod \"c3dc8df9-662d-49a6-a604-ee0294519e50\" (UID: \"c3dc8df9-662d-49a6-a604-ee0294519e50\") " Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.849777 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3dc8df9-662d-49a6-a604-ee0294519e50-kubelet-dir\") pod \"c3dc8df9-662d-49a6-a604-ee0294519e50\" (UID: \"c3dc8df9-662d-49a6-a604-ee0294519e50\") " Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.850092 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3dc8df9-662d-49a6-a604-ee0294519e50-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c3dc8df9-662d-49a6-a604-ee0294519e50" (UID: "c3dc8df9-662d-49a6-a604-ee0294519e50"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.877079 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3dc8df9-662d-49a6-a604-ee0294519e50-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c3dc8df9-662d-49a6-a604-ee0294519e50" (UID: "c3dc8df9-662d-49a6-a604-ee0294519e50"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.895446 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-md2f2"] Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.927843 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-trcq9"] Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.951710 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dc8df9-662d-49a6-a604-ee0294519e50-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 13:59:13 crc kubenswrapper[4858]: I1205 13:59:13.951925 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3dc8df9-662d-49a6-a604-ee0294519e50-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.031234 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.067796 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.099251 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.226292 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.240209 4858 patch_prober.go:28] interesting pod/console-f9d7485db-x25gp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.240259 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x25gp" podUID="1329b103-5d7b-492b-96ed-c7b5b10e8edd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.375065 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.560402 4858 generic.go:334] "Generic (PLEG): container finished" podID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerID="6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310" exitCode=0 Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.560493 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hq9" event={"ID":"792eaec2-2c9f-487c-ab4b-437fa7897bee","Type":"ContainerDied","Data":"6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.560767 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hq9" 
event={"ID":"792eaec2-2c9f-487c-ab4b-437fa7897bee","Type":"ContainerStarted","Data":"a74719a0dce15f69efd7a282fd33c7a42ded7a22e4abd49299fac28d26596c2f"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.576592 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.577074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c3dc8df9-662d-49a6-a604-ee0294519e50","Type":"ContainerDied","Data":"c48448768e0920a7172ea039a758ddf38b7de697fec69e89e370bf91cb52378a"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.577126 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c48448768e0920a7172ea039a758ddf38b7de697fec69e89e370bf91cb52378a" Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.592982 4858 generic.go:334] "Generic (PLEG): container finished" podID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerID="1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64" exitCode=0 Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.593274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-md2f2" event={"ID":"14af5b55-95bb-4d81-a390-3cbdc232f270","Type":"ContainerDied","Data":"1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.593332 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-md2f2" event={"ID":"14af5b55-95bb-4d81-a390-3cbdc232f270","Type":"ContainerStarted","Data":"679c7e436fa96de73325fac01061e617b285e87c241e5fe0c0aea94adf4d2337"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.601351 4858 generic.go:334] "Generic (PLEG): container finished" podID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerID="7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477" exitCode=0 Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.601563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dst87" event={"ID":"a39fea16-b688-40d4-8077-1bbd6d653cf4","Type":"ContainerDied","Data":"7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.601611 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dst87" event={"ID":"a39fea16-b688-40d4-8077-1bbd6d653cf4","Type":"ContainerStarted","Data":"7c77c23d96630b5af9b9c8f210a66b41d557e1cc828d1c5faa1b37ffbc8dff77"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.606814 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" event={"ID":"17d98864-f8cf-4f61-9707-30871521a9f2","Type":"ContainerStarted","Data":"370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.606913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" event={"ID":"17d98864-f8cf-4f61-9707-30871521a9f2","Type":"ContainerStarted","Data":"d4b64f1f9d37d93846495fc1d90e0b7576d44d87f0e2b855e10a628b9899a418"} Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.607442 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.624072 4858 generic.go:334] "Generic (PLEG): container finished" podID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerID="e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4" exitCode=0
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.624143 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fm4gw" event={"ID":"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c","Type":"ContainerDied","Data":"e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4"}
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.631424 4858 generic.go:334] "Generic (PLEG): container finished" podID="cedb2565-0837-4473-89e6-84269d6e3766" containerID="c2104c72b6990c443ed3bc7434b5b7ccc9fcc3df8306832fa903138c5327e226" exitCode=0
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.631527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" event={"ID":"cedb2565-0837-4473-89e6-84269d6e3766","Type":"ContainerDied","Data":"c2104c72b6990c443ed3bc7434b5b7ccc9fcc3df8306832fa903138c5327e226"}
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.650352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"11aed447-46fa-47b9-964f-ee26867aa8e1","Type":"ContainerDied","Data":"a0e79dba0f651491c1948c716e51a9a6aa9704139ae415ae8cb1c7f31a0675a2"}
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.650473 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e79dba0f651491c1948c716e51a9a6aa9704139ae415ae8cb1c7f31a0675a2"
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.650680 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.666144 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" podStartSLOduration=142.666126887 podStartE2EDuration="2m22.666126887s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:14.657627163 +0000 UTC m=+163.205225302" watchObservedRunningTime="2025-12-05 13:59:14.666126887 +0000 UTC m=+163.213725026"
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.667365 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 13:59:14 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld
Dec 05 13:59:14 crc kubenswrapper[4858]: [+]process-running ok
Dec 05 13:59:14 crc kubenswrapper[4858]: healthz check failed
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.667442 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.760338 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 13:59:14 crc kubenswrapper[4858]: I1205 13:59:14.760422 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 13:59:15 crc kubenswrapper[4858]: I1205 13:59:15.500940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:59:15 crc kubenswrapper[4858]: I1205 13:59:15.506726 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6197c8ee-275b-44dd-b402-e4b8039c4997-metrics-certs\") pod \"network-metrics-daemon-5jh87\" (UID: \"6197c8ee-275b-44dd-b402-e4b8039c4997\") " pod="openshift-multus/network-metrics-daemon-5jh87"
Dec 05 13:59:15 crc kubenswrapper[4858]: I1205 13:59:15.617817 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5jh87" Dec 05 13:59:15 crc kubenswrapper[4858]: I1205 13:59:15.666164 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 13:59:15 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Dec 05 13:59:15 crc kubenswrapper[4858]: [+]process-running ok Dec 05 13:59:15 crc kubenswrapper[4858]: healthz check failed Dec 05 13:59:15 crc kubenswrapper[4858]: I1205 13:59:15.666206 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.396371 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.461950 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5jh87"] Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.527893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cedb2565-0837-4473-89e6-84269d6e3766-config-volume\") pod \"cedb2565-0837-4473-89e6-84269d6e3766\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.527955 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cedb2565-0837-4473-89e6-84269d6e3766-secret-volume\") pod \"cedb2565-0837-4473-89e6-84269d6e3766\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.528012 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf7b8\" (UniqueName: \"kubernetes.io/projected/cedb2565-0837-4473-89e6-84269d6e3766-kube-api-access-wf7b8\") pod \"cedb2565-0837-4473-89e6-84269d6e3766\" (UID: \"cedb2565-0837-4473-89e6-84269d6e3766\") " Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.528596 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cedb2565-0837-4473-89e6-84269d6e3766-config-volume" (OuterVolumeSpecName: "config-volume") pod "cedb2565-0837-4473-89e6-84269d6e3766" (UID: "cedb2565-0837-4473-89e6-84269d6e3766"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.548461 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedb2565-0837-4473-89e6-84269d6e3766-kube-api-access-wf7b8" (OuterVolumeSpecName: "kube-api-access-wf7b8") pod "cedb2565-0837-4473-89e6-84269d6e3766" (UID: "cedb2565-0837-4473-89e6-84269d6e3766"). InnerVolumeSpecName "kube-api-access-wf7b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.548894 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cedb2565-0837-4473-89e6-84269d6e3766-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cedb2565-0837-4473-89e6-84269d6e3766" (UID: "cedb2565-0837-4473-89e6-84269d6e3766"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.630838 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cedb2565-0837-4473-89e6-84269d6e3766-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.630875 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cedb2565-0837-4473-89e6-84269d6e3766-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.630896 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf7b8\" (UniqueName: \"kubernetes.io/projected/cedb2565-0837-4473-89e6-84269d6e3766-kube-api-access-wf7b8\") on node \"crc\" DevicePath \"\"" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.667519 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 13:59:16 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Dec 05 13:59:16 crc kubenswrapper[4858]: [+]process-running ok Dec 05 13:59:16 crc kubenswrapper[4858]: healthz check failed Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.667576 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.798349 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5jh87" event={"ID":"6197c8ee-275b-44dd-b402-e4b8039c4997","Type":"ContainerStarted","Data":"3c7359f5a626d3087c6bcafe3cbd362666f883fcd6c0ed23cd5ce1ad1db8b567"} Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.817454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" event={"ID":"cedb2565-0837-4473-89e6-84269d6e3766","Type":"ContainerDied","Data":"d22916b90c9eedea967d813bfee1f1c44bcd69a6c8635410fb214f113a0957ae"} Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.817498 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d22916b90c9eedea967d813bfee1f1c44bcd69a6c8635410fb214f113a0957ae" Dec 05 13:59:16 crc kubenswrapper[4858]: I1205 13:59:16.817433 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb" Dec 05 13:59:17 crc kubenswrapper[4858]: I1205 13:59:17.435797 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:59:17 crc kubenswrapper[4858]: I1205 13:59:17.441431 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-c7tvn" Dec 05 13:59:17 crc kubenswrapper[4858]: I1205 13:59:17.666390 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 13:59:17 crc kubenswrapper[4858]: [-]has-synced failed: reason withheld Dec 05 13:59:17 crc kubenswrapper[4858]: [+]process-running ok Dec 05 13:59:17 crc kubenswrapper[4858]: healthz check failed Dec 05 13:59:17 crc kubenswrapper[4858]: I1205 13:59:17.666679 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 13:59:18 crc kubenswrapper[4858]: I1205 13:59:18.679554 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kmzj6" Dec 05 13:59:18 crc kubenswrapper[4858]: I1205 13:59:18.682926 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kmzj6" Dec 05 13:59:18 crc kubenswrapper[4858]: I1205 13:59:18.872139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5jh87" event={"ID":"6197c8ee-275b-44dd-b402-e4b8039c4997","Type":"ContainerStarted","Data":"07894f9f0ab41eb4021141d2744c57a8a6c646a1294eed703d7c37550eeb7f7b"} Dec 05 13:59:19 crc kubenswrapper[4858]: I1205 13:59:19.935787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5jh87" event={"ID":"6197c8ee-275b-44dd-b402-e4b8039c4997","Type":"ContainerStarted","Data":"17ba68a1a190010836e3b4896cd6181c5ad5aa20361a0ce40e0f45ab8cc3f632"} Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.241300 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.241384 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.241379 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.241436 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rzsvl" 
podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.241489 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-rzsvl" Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.242134 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"fb9c94c0c7484fe505f50c792f8ec6fe59892a80c6a7ead93e4de58c736eb285"} pod="openshift-console/downloads-7954f5f757-rzsvl" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.242224 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" containerID="cri-o://fb9c94c0c7484fe505f50c792f8ec6fe59892a80c6a7ead93e4de58c736eb285" gracePeriod=2 Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.243104 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:22 crc kubenswrapper[4858]: I1205 13:59:22.243133 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Dec 05 13:59:23 crc kubenswrapper[4858]: E1205 13:59:23.073758 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2db6d150_e5c9_41b2_9289_2f6ee74c648b.slice/crio-conmon-fb9c94c0c7484fe505f50c792f8ec6fe59892a80c6a7ead93e4de58c736eb285.scope\": RecentStats: unable to find data in memory cache]" Dec 05 13:59:24 crc kubenswrapper[4858]: I1205 13:59:24.035457 4858 generic.go:334] "Generic (PLEG): container finished" podID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerID="fb9c94c0c7484fe505f50c792f8ec6fe59892a80c6a7ead93e4de58c736eb285" exitCode=0 Dec 05 13:59:24 crc kubenswrapper[4858]: I1205 13:59:24.035531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rzsvl" event={"ID":"2db6d150-e5c9-41b2-9289-2f6ee74c648b","Type":"ContainerDied","Data":"fb9c94c0c7484fe505f50c792f8ec6fe59892a80c6a7ead93e4de58c736eb285"} Dec 05 13:59:24 crc kubenswrapper[4858]: I1205 13:59:24.035745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-rzsvl" event={"ID":"2db6d150-e5c9-41b2-9289-2f6ee74c648b","Type":"ContainerStarted","Data":"3878b952cb1d1def95693a7e754f6a4292aba2db47be7102ee11add1d212c70b"} Dec 05 13:59:24 crc kubenswrapper[4858]: I1205 13:59:24.036725 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Dec 05 13:59:24 crc kubenswrapper[4858]: I1205 
Dec 05 13:59:24 crc kubenswrapper[4858]: I1205 13:59:24.056132 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5jh87" podStartSLOduration=152.056113221 podStartE2EDuration="2m32.056113221s" podCreationTimestamp="2025-12-05 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 13:59:19.965509347 +0000 UTC m=+168.513107486" watchObservedRunningTime="2025-12-05 13:59:24.056113221 +0000 UTC m=+172.603711360"
Dec 05 13:59:24 crc kubenswrapper[4858]: I1205 13:59:24.275050 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:59:24 crc kubenswrapper[4858]: I1205 13:59:24.279456 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x25gp"
Dec 05 13:59:32 crc kubenswrapper[4858]: I1205 13:59:32.238193 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-rzsvl"
Dec 05 13:59:32 crc kubenswrapper[4858]: I1205 13:59:32.238831 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 05 13:59:32 crc kubenswrapper[4858]: I1205 13:59:32.238886 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 05 13:59:32 crc kubenswrapper[4858]: I1205 13:59:32.239750 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 05 13:59:32 crc kubenswrapper[4858]: I1205 13:59:32.239777 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 05 13:59:32 crc kubenswrapper[4858]: I1205 13:59:32.240168 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 05 13:59:32 crc kubenswrapper[4858]: I1205 13:59:32.240197 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 05 13:59:32 crc kubenswrapper[4858]: I1205 13:59:32.255596 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9"
Dec 05 13:59:34 crc kubenswrapper[4858]: I1205 13:59:34.070371 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq"
Dec 05 13:59:40 crc kubenswrapper[4858]: I1205 13:59:40.179756 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 13:59:42 crc kubenswrapper[4858]: I1205 13:59:42.238706 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 05 13:59:42 crc kubenswrapper[4858]: I1205 13:59:42.239063 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 05 13:59:42 crc kubenswrapper[4858]: I1205 13:59:42.238721 4858 patch_prober.go:28] interesting pod/downloads-7954f5f757-rzsvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Dec 05 13:59:42 crc kubenswrapper[4858]: I1205 13:59:42.239168 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-rzsvl" podUID="2db6d150-e5c9-41b2-9289-2f6ee74c648b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Dec 05 13:59:44 crc kubenswrapper[4858]: I1205 13:59:44.760270 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 13:59:44 crc kubenswrapper[4858]: I1205 13:59:44.760605 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.308334 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 05 13:59:46 crc kubenswrapper[4858]: E1205 13:59:46.308574 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dc8df9-662d-49a6-a604-ee0294519e50" containerName="pruner"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.308589 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dc8df9-662d-49a6-a604-ee0294519e50" containerName="pruner"
Dec 05 13:59:46 crc kubenswrapper[4858]: E1205 13:59:46.308605 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedb2565-0837-4473-89e6-84269d6e3766" containerName="collect-profiles"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.308610 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedb2565-0837-4473-89e6-84269d6e3766" containerName="collect-profiles"
Dec 05 13:59:46 crc kubenswrapper[4858]: E1205 13:59:46.308619 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11aed447-46fa-47b9-964f-ee26867aa8e1" containerName="pruner"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.308625 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11aed447-46fa-47b9-964f-ee26867aa8e1" containerName="pruner"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.308721 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3dc8df9-662d-49a6-a604-ee0294519e50" containerName="pruner"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.308728 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedb2565-0837-4473-89e6-84269d6e3766" containerName="collect-profiles"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.308737 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11aed447-46fa-47b9-964f-ee26867aa8e1" containerName="pruner"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.309100 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.313020 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.315399 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.322341 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.496294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7bdc07-1638-401a-8307-bd51882dc651-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a7bdc07-1638-401a-8307-bd51882dc651\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.496372 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7bdc07-1638-401a-8307-bd51882dc651-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a7bdc07-1638-401a-8307-bd51882dc651\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.597351 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7bdc07-1638-401a-8307-bd51882dc651-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a7bdc07-1638-401a-8307-bd51882dc651\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.597458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7bdc07-1638-401a-8307-bd51882dc651-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a7bdc07-1638-401a-8307-bd51882dc651\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.597541 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7bdc07-1638-401a-8307-bd51882dc651-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9a7bdc07-1638-401a-8307-bd51882dc651\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.622085 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7bdc07-1638-401a-8307-bd51882dc651-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9a7bdc07-1638-401a-8307-bd51882dc651\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 05 13:59:46 crc kubenswrapper[4858]: I1205 13:59:46.638402 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.322381 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.323917 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.329706 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.461228 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-kubelet-dir\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.461277 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df3eb38e-7204-4116-9870-a256348a5034-kube-api-access\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.461306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-var-lock\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.562904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-kubelet-dir\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.562963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df3eb38e-7204-4116-9870-a256348a5034-kube-api-access\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.563006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-var-lock\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc"
\"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.563014 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-kubelet-dir\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.563186 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-var-lock\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.598398 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df3eb38e-7204-4116-9870-a256348a5034-kube-api-access\") pod \"installer-9-crc\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 13:59:51 crc kubenswrapper[4858]: I1205 13:59:51.665357 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 13:59:52 crc kubenswrapper[4858]: I1205 13:59:52.248521 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-rzsvl" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.137982 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6"] Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.140613 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.145084 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6"] Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.147299 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.147698 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.278484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a5b8ed5-1641-4428-8fff-05deab84fe14-secret-volume\") pod \"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.278958 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a5b8ed5-1641-4428-8fff-05deab84fe14-config-volume\") pod \"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.279088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsmq5\" (UniqueName: \"kubernetes.io/projected/0a5b8ed5-1641-4428-8fff-05deab84fe14-kube-api-access-hsmq5\") pod \"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.380196 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a5b8ed5-1641-4428-8fff-05deab84fe14-secret-volume\") pod \"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.380498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a5b8ed5-1641-4428-8fff-05deab84fe14-config-volume\") pod \"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.380635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsmq5\" (UniqueName: \"kubernetes.io/projected/0a5b8ed5-1641-4428-8fff-05deab84fe14-kube-api-access-hsmq5\") pod \"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.381352 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a5b8ed5-1641-4428-8fff-05deab84fe14-config-volume\") pod 
\"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.386742 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a5b8ed5-1641-4428-8fff-05deab84fe14-secret-volume\") pod \"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.402066 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsmq5\" (UniqueName: \"kubernetes.io/projected/0a5b8ed5-1641-4428-8fff-05deab84fe14-kube-api-access-hsmq5\") pod \"collect-profiles-29415720-fnqq6\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:00 crc kubenswrapper[4858]: I1205 14:00:00.466091 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:04 crc kubenswrapper[4858]: E1205 14:00:04.047267 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 05 14:00:04 crc kubenswrapper[4858]: E1205 14:00:04.047979 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdwpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2g9nd_openshift-marketplace(3175524c-136d-44a0-9324-0d063376c05f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 14:00:04 crc kubenswrapper[4858]: E1205 14:00:04.049923 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2g9nd" podUID="3175524c-136d-44a0-9324-0d063376c05f" Dec 05 14:00:09 crc kubenswrapper[4858]: E1205 14:00:09.680051 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2g9nd" podUID="3175524c-136d-44a0-9324-0d063376c05f" Dec 05 14:00:11 crc kubenswrapper[4858]: E1205 14:00:11.441871 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 05 14:00:11 crc kubenswrapper[4858]: E1205 14:00:11.442215 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vkc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6cpxg_openshift-marketplace(e65f2d84-01e5-440d-b92c-79227561f3c0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 14:00:11 crc kubenswrapper[4858]: E1205 14:00:11.443660 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6cpxg" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" Dec 05 14:00:11 crc kubenswrapper[4858]: E1205 14:00:11.477702 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 05 14:00:11 crc kubenswrapper[4858]: E1205 14:00:11.478108 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqvmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9z5pd_openshift-marketplace(d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 14:00:11 crc kubenswrapper[4858]: E1205 14:00:11.479242 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9z5pd" podUID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.381298 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.381410 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rqxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-md2f2_openshift-marketplace(14af5b55-95bb-4d81-a390-3cbdc232f270): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.382686 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-md2f2" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.394765 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.394932 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.396236 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-d27qp" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000"
Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.404218 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.404346 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p24sn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fm4gw_openshift-marketplace(02abc0e5-f9e1-41de-bb1c-40bd94b29f1c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 05 14:00:12 crc kubenswrapper[4858]: E1205 14:00:12.405537 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fm4gw" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c"
Dec 05 14:00:14 crc kubenswrapper[4858]: I1205 14:00:14.760472 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 14:00:14 crc kubenswrapper[4858]: I1205 14:00:14.760747 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 14:00:14 crc kubenswrapper[4858]: I1205 14:00:14.760786 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn"
Dec 05 14:00:14 crc kubenswrapper[4858]: I1205 14:00:14.761366 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 14:00:14 crc kubenswrapper[4858]: I1205 14:00:14.761417 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf" gracePeriod=600
pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf" gracePeriod=600 Dec 05 14:00:15 crc kubenswrapper[4858]: I1205 14:00:15.433067 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf" exitCode=0 Dec 05 14:00:15 crc kubenswrapper[4858]: I1205 14:00:15.433126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf"} Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.507312 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fm4gw" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.507578 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6cpxg" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.507646 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-md2f2" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.507679 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-d27qp" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.507713 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9z5pd" podUID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.594276 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.594859 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqx6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j9hq9_openshift-marketplace(792eaec2-2c9f-487c-ab4b-437fa7897bee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.596204 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j9hq9" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.606866 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.607012 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzzf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dst87_openshift-marketplace(a39fea16-b688-40d4-8077-1bbd6d653cf4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 14:00:15 crc kubenswrapper[4858]: E1205 14:00:15.608931 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dst87" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" Dec 05 14:00:15 crc kubenswrapper[4858]: I1205 14:00:15.997109 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6"] Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.057060 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 14:00:16 crc kubenswrapper[4858]: W1205 14:00:16.072522 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9a7bdc07_1638_401a_8307_bd51882dc651.slice/crio-7c51e117a75db57a793fa3c96b1da6e4a256f3ffa26f1bcfe9be4c5324e9faed WatchSource:0}: Error finding container 7c51e117a75db57a793fa3c96b1da6e4a256f3ffa26f1bcfe9be4c5324e9faed: Status 404 returned error can't find the container with id 7c51e117a75db57a793fa3c96b1da6e4a256f3ffa26f1bcfe9be4c5324e9faed Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.095256 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 14:00:16 crc kubenswrapper[4858]: W1205 14:00:16.115250 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddf3eb38e_7204_4116_9870_a256348a5034.slice/crio-073b167f42065ed8c55c799eee7dec9b49a8ab117c00afb28c5e22ca1afb5f27 WatchSource:0}: Error finding container 073b167f42065ed8c55c799eee7dec9b49a8ab117c00afb28c5e22ca1afb5f27: Status 404 returned error can't find the container with id 073b167f42065ed8c55c799eee7dec9b49a8ab117c00afb28c5e22ca1afb5f27 Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 
14:00:16.438790 4858 generic.go:334] "Generic (PLEG): container finished" podID="0a5b8ed5-1641-4428-8fff-05deab84fe14" containerID="0e0ae0af0999967084d2efaeef15f83e57ad62a62a536eafae921ac7df148a6a" exitCode=0 Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.438882 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" event={"ID":"0a5b8ed5-1641-4428-8fff-05deab84fe14","Type":"ContainerDied","Data":"0e0ae0af0999967084d2efaeef15f83e57ad62a62a536eafae921ac7df148a6a"} Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.438965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" event={"ID":"0a5b8ed5-1641-4428-8fff-05deab84fe14","Type":"ContainerStarted","Data":"e4dfe07851c2724e54c981a0827daff78837bfb407afbd12b2f69714f4923e9e"} Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.441509 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"df3eb38e-7204-4116-9870-a256348a5034","Type":"ContainerStarted","Data":"659d43f359c7bd659182c364d67154b583e89c0d044e8247227818230e78d2e5"} Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.441539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"df3eb38e-7204-4116-9870-a256348a5034","Type":"ContainerStarted","Data":"073b167f42065ed8c55c799eee7dec9b49a8ab117c00afb28c5e22ca1afb5f27"} Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.444317 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"3ab1fc1ade15987d254249f652eeb63b38a39486edb0297f61ed8eaf801d6fa5"} Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.446235 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a7bdc07-1638-401a-8307-bd51882dc651","Type":"ContainerStarted","Data":"c600ac6f4b4e0c16135fe2dafca8bd7a86e985408de705052c96865dc353913c"} Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.446305 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a7bdc07-1638-401a-8307-bd51882dc651","Type":"ContainerStarted","Data":"7c51e117a75db57a793fa3c96b1da6e4a256f3ffa26f1bcfe9be4c5324e9faed"} Dec 05 14:00:16 crc kubenswrapper[4858]: E1205 14:00:16.450056 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dst87" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" Dec 05 14:00:16 crc kubenswrapper[4858]: E1205 14:00:16.450151 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j9hq9" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.473091 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=30.473072431 podStartE2EDuration="30.473072431s" 
podCreationTimestamp="2025-12-05 13:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:00:16.469098261 +0000 UTC m=+225.016696420" watchObservedRunningTime="2025-12-05 14:00:16.473072431 +0000 UTC m=+225.020670570" Dec 05 14:00:16 crc kubenswrapper[4858]: I1205 14:00:16.537051 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=25.537030326 podStartE2EDuration="25.537030326s" podCreationTimestamp="2025-12-05 13:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:00:16.536746778 +0000 UTC m=+225.084344927" watchObservedRunningTime="2025-12-05 14:00:16.537030326 +0000 UTC m=+225.084628465" Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.454041 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a7bdc07-1638-401a-8307-bd51882dc651" containerID="c600ac6f4b4e0c16135fe2dafca8bd7a86e985408de705052c96865dc353913c" exitCode=0 Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.454132 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a7bdc07-1638-401a-8307-bd51882dc651","Type":"ContainerDied","Data":"c600ac6f4b4e0c16135fe2dafca8bd7a86e985408de705052c96865dc353913c"} Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.708072 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.821344 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a5b8ed5-1641-4428-8fff-05deab84fe14-config-volume\") pod \"0a5b8ed5-1641-4428-8fff-05deab84fe14\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.821429 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsmq5\" (UniqueName: \"kubernetes.io/projected/0a5b8ed5-1641-4428-8fff-05deab84fe14-kube-api-access-hsmq5\") pod \"0a5b8ed5-1641-4428-8fff-05deab84fe14\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.821494 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a5b8ed5-1641-4428-8fff-05deab84fe14-secret-volume\") pod \"0a5b8ed5-1641-4428-8fff-05deab84fe14\" (UID: \"0a5b8ed5-1641-4428-8fff-05deab84fe14\") " Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.822378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5b8ed5-1641-4428-8fff-05deab84fe14-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a5b8ed5-1641-4428-8fff-05deab84fe14" (UID: "0a5b8ed5-1641-4428-8fff-05deab84fe14"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.830055 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5b8ed5-1641-4428-8fff-05deab84fe14-kube-api-access-hsmq5" (OuterVolumeSpecName: "kube-api-access-hsmq5") pod "0a5b8ed5-1641-4428-8fff-05deab84fe14" (UID: "0a5b8ed5-1641-4428-8fff-05deab84fe14"). InnerVolumeSpecName "kube-api-access-hsmq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.830165 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a5b8ed5-1641-4428-8fff-05deab84fe14-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a5b8ed5-1641-4428-8fff-05deab84fe14" (UID: "0a5b8ed5-1641-4428-8fff-05deab84fe14"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.922608 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a5b8ed5-1641-4428-8fff-05deab84fe14-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.922639 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a5b8ed5-1641-4428-8fff-05deab84fe14-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:17 crc kubenswrapper[4858]: I1205 14:00:17.922650 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsmq5\" (UniqueName: \"kubernetes.io/projected/0a5b8ed5-1641-4428-8fff-05deab84fe14-kube-api-access-hsmq5\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.461679 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.461667 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6" event={"ID":"0a5b8ed5-1641-4428-8fff-05deab84fe14","Type":"ContainerDied","Data":"e4dfe07851c2724e54c981a0827daff78837bfb407afbd12b2f69714f4923e9e"} Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.461813 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4dfe07851c2724e54c981a0827daff78837bfb407afbd12b2f69714f4923e9e" Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.680975 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.834698 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7bdc07-1638-401a-8307-bd51882dc651-kube-api-access\") pod \"9a7bdc07-1638-401a-8307-bd51882dc651\" (UID: \"9a7bdc07-1638-401a-8307-bd51882dc651\") " Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.834916 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7bdc07-1638-401a-8307-bd51882dc651-kubelet-dir\") pod \"9a7bdc07-1638-401a-8307-bd51882dc651\" (UID: \"9a7bdc07-1638-401a-8307-bd51882dc651\") " Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.835286 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a7bdc07-1638-401a-8307-bd51882dc651-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9a7bdc07-1638-401a-8307-bd51882dc651" (UID: "9a7bdc07-1638-401a-8307-bd51882dc651"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.840045 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7bdc07-1638-401a-8307-bd51882dc651-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9a7bdc07-1638-401a-8307-bd51882dc651" (UID: "9a7bdc07-1638-401a-8307-bd51882dc651"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.936301 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9a7bdc07-1638-401a-8307-bd51882dc651-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:18 crc kubenswrapper[4858]: I1205 14:00:18.936363 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a7bdc07-1638-401a-8307-bd51882dc651-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:19 crc kubenswrapper[4858]: I1205 14:00:19.470574 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9a7bdc07-1638-401a-8307-bd51882dc651","Type":"ContainerDied","Data":"7c51e117a75db57a793fa3c96b1da6e4a256f3ffa26f1bcfe9be4c5324e9faed"} Dec 05 14:00:19 crc kubenswrapper[4858]: I1205 14:00:19.470884 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c51e117a75db57a793fa3c96b1da6e4a256f3ffa26f1bcfe9be4c5324e9faed" Dec 05 14:00:19 crc kubenswrapper[4858]: I1205 14:00:19.470657 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 14:00:22 crc kubenswrapper[4858]: I1205 14:00:22.488976 4858 generic.go:334] "Generic (PLEG): container finished" podID="3175524c-136d-44a0-9324-0d063376c05f" containerID="8f8f6535c6c206bab9191e690c4e65a134d1010052ca4b328d6a3ab8a24390b5" exitCode=0 Dec 05 14:00:22 crc kubenswrapper[4858]: I1205 14:00:22.489068 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2g9nd" event={"ID":"3175524c-136d-44a0-9324-0d063376c05f","Type":"ContainerDied","Data":"8f8f6535c6c206bab9191e690c4e65a134d1010052ca4b328d6a3ab8a24390b5"} Dec 05 14:00:23 crc kubenswrapper[4858]: I1205 14:00:23.497322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2g9nd" event={"ID":"3175524c-136d-44a0-9324-0d063376c05f","Type":"ContainerStarted","Data":"f589952472653e0680c07718c678f3e4e1668558210fb9c178fc324deb13b3f9"} Dec 05 14:00:23 crc kubenswrapper[4858]: I1205 14:00:23.521012 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2g9nd" podStartSLOduration=4.838270603 podStartE2EDuration="1m15.520994748s" podCreationTimestamp="2025-12-05 13:59:08 +0000 UTC" firstStartedPulling="2025-12-05 13:59:12.294787816 +0000 UTC m=+160.842385955" lastFinishedPulling="2025-12-05 14:00:22.977511971 +0000 UTC m=+231.525110100" observedRunningTime="2025-12-05 14:00:23.519032963 +0000 UTC m=+232.066631102" watchObservedRunningTime="2025-12-05 14:00:23.520994748 +0000 UTC m=+232.068592887" Dec 05 14:00:28 crc kubenswrapper[4858]: I1205 14:00:28.522349 4858 generic.go:334] "Generic (PLEG): container finished" podID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerID="e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe" exitCode=0 Dec 05 14:00:28 crc kubenswrapper[4858]: I1205 14:00:28.522448 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-md2f2" event={"ID":"14af5b55-95bb-4d81-a390-3cbdc232f270","Type":"ContainerDied","Data":"e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe"} Dec 05 14:00:28 crc kubenswrapper[4858]: I1205 14:00:28.526633 4858 generic.go:334] "Generic (PLEG): container finished" podID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerID="59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528" exitCode=0 Dec 05 14:00:28 crc kubenswrapper[4858]: I1205 14:00:28.526725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cpxg" event={"ID":"e65f2d84-01e5-440d-b92c-79227561f3c0","Type":"ContainerDied","Data":"59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528"} Dec 05 14:00:28 crc kubenswrapper[4858]: I1205 14:00:28.544461 4858 generic.go:334] "Generic (PLEG): container finished" podID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerID="06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d" exitCode=0 Dec 05 14:00:28 crc kubenswrapper[4858]: I1205 14:00:28.544496 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fm4gw" event={"ID":"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c","Type":"ContainerDied","Data":"06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d"} Dec 05 14:00:28 crc kubenswrapper[4858]: I1205 14:00:28.765049 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2g9nd" Dec 05 14:00:28 crc 
kubenswrapper[4858]: I1205 14:00:28.765114 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2g9nd" Dec 05 14:00:28 crc kubenswrapper[4858]: I1205 14:00:28.909810 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2g9nd" Dec 05 14:00:29 crc kubenswrapper[4858]: I1205 14:00:29.558780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dst87" event={"ID":"a39fea16-b688-40d4-8077-1bbd6d653cf4","Type":"ContainerStarted","Data":"b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2"} Dec 05 14:00:29 crc kubenswrapper[4858]: I1205 14:00:29.560152 4858 generic.go:334] "Generic (PLEG): container finished" podID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerID="97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223" exitCode=0 Dec 05 14:00:29 crc kubenswrapper[4858]: I1205 14:00:29.560223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d27qp" event={"ID":"825a6e39-523e-4040-bee6-14b3ed5d2000","Type":"ContainerDied","Data":"97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223"} Dec 05 14:00:29 crc kubenswrapper[4858]: I1205 14:00:29.566408 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hq9" event={"ID":"792eaec2-2c9f-487c-ab4b-437fa7897bee","Type":"ContainerStarted","Data":"40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9"} Dec 05 14:00:29 crc kubenswrapper[4858]: I1205 14:00:29.637029 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2g9nd" Dec 05 14:00:30 crc kubenswrapper[4858]: I1205 14:00:30.585116 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cpxg" event={"ID":"e65f2d84-01e5-440d-b92c-79227561f3c0","Type":"ContainerStarted","Data":"01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e"} Dec 05 14:00:30 crc kubenswrapper[4858]: I1205 14:00:30.591429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fm4gw" event={"ID":"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c","Type":"ContainerStarted","Data":"6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8"} Dec 05 14:00:30 crc kubenswrapper[4858]: I1205 14:00:30.592804 4858 generic.go:334] "Generic (PLEG): container finished" podID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerID="40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9" exitCode=0 Dec 05 14:00:30 crc kubenswrapper[4858]: I1205 14:00:30.592862 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hq9" event={"ID":"792eaec2-2c9f-487c-ab4b-437fa7897bee","Type":"ContainerDied","Data":"40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9"} Dec 05 14:00:30 crc kubenswrapper[4858]: I1205 14:00:30.595755 4858 generic.go:334] "Generic (PLEG): container finished" podID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerID="b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2" exitCode=0 Dec 05 14:00:30 crc kubenswrapper[4858]: I1205 14:00:30.596459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dst87" 
event={"ID":"a39fea16-b688-40d4-8077-1bbd6d653cf4","Type":"ContainerDied","Data":"b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2"} Dec 05 14:00:31 crc kubenswrapper[4858]: I1205 14:00:31.627280 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fm4gw" podStartSLOduration=7.222078932 podStartE2EDuration="1m21.627254904s" podCreationTimestamp="2025-12-05 13:59:10 +0000 UTC" firstStartedPulling="2025-12-05 13:59:14.626699293 +0000 UTC m=+163.174297432" lastFinishedPulling="2025-12-05 14:00:29.031875265 +0000 UTC m=+237.579473404" observedRunningTime="2025-12-05 14:00:31.623781966 +0000 UTC m=+240.171380125" watchObservedRunningTime="2025-12-05 14:00:31.627254904 +0000 UTC m=+240.174853043" Dec 05 14:00:31 crc kubenswrapper[4858]: I1205 14:00:31.653943 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6cpxg" podStartSLOduration=8.17960349 podStartE2EDuration="1m23.653922122s" podCreationTimestamp="2025-12-05 13:59:08 +0000 UTC" firstStartedPulling="2025-12-05 13:59:13.543745106 +0000 UTC m=+162.091343245" lastFinishedPulling="2025-12-05 14:00:29.018063738 +0000 UTC m=+237.565661877" observedRunningTime="2025-12-05 14:00:31.650482195 +0000 UTC m=+240.198080334" watchObservedRunningTime="2025-12-05 14:00:31.653922122 +0000 UTC m=+240.201520261" Dec 05 14:00:31 crc kubenswrapper[4858]: I1205 14:00:31.681196 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 14:00:31 crc kubenswrapper[4858]: I1205 14:00:31.681351 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 14:00:32 crc kubenswrapper[4858]: I1205 14:00:32.723298 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fm4gw" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="registry-server" probeResult="failure" output=< Dec 05 14:00:32 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:00:32 crc kubenswrapper[4858]: > Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.330592 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4nzbm"] Dec 05 14:00:38 crc kubenswrapper[4858]: E1205 14:00:38.332382 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5b8ed5-1641-4428-8fff-05deab84fe14" containerName="collect-profiles" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.332481 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5b8ed5-1641-4428-8fff-05deab84fe14" containerName="collect-profiles" Dec 05 14:00:38 crc kubenswrapper[4858]: E1205 14:00:38.332559 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7bdc07-1638-401a-8307-bd51882dc651" containerName="pruner" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.332637 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7bdc07-1638-401a-8307-bd51882dc651" containerName="pruner" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.332854 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5b8ed5-1641-4428-8fff-05deab84fe14" containerName="collect-profiles" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.332950 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7bdc07-1638-401a-8307-bd51882dc651" 
containerName="pruner" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.333474 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.349799 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4nzbm"] Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.438804 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5f0906b-baba-4d0c-9303-aaa807285c76-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.439120 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjlm4\" (UniqueName: \"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-kube-api-access-zjlm4\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.439157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-bound-sa-token\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.439185 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5f0906b-baba-4d0c-9303-aaa807285c76-registry-certificates\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.439225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.439245 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-registry-tls\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.439271 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5f0906b-baba-4d0c-9303-aaa807285c76-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.439303 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f0906b-baba-4d0c-9303-aaa807285c76-trusted-ca\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.471138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.540481 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5f0906b-baba-4d0c-9303-aaa807285c76-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.540793 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f0906b-baba-4d0c-9303-aaa807285c76-trusted-ca\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.540965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5f0906b-baba-4d0c-9303-aaa807285c76-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.541076 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjlm4\" (UniqueName: \"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-kube-api-access-zjlm4\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.541181 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-bound-sa-token\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.541263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5f0906b-baba-4d0c-9303-aaa807285c76-registry-certificates\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.541354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-registry-tls\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.541005 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5f0906b-baba-4d0c-9303-aaa807285c76-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.542586 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f0906b-baba-4d0c-9303-aaa807285c76-trusted-ca\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.542845 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5f0906b-baba-4d0c-9303-aaa807285c76-registry-certificates\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.547776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5f0906b-baba-4d0c-9303-aaa807285c76-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.548232 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-registry-tls\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.560109 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjlm4\" (UniqueName: \"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-kube-api-access-zjlm4\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.560448 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f0906b-baba-4d0c-9303-aaa807285c76-bound-sa-token\") pod \"image-registry-66df7c8f76-4nzbm\" (UID: \"b5f0906b-baba-4d0c-9303-aaa807285c76\") " pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.654323 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.832535 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6cpxg" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.832574 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6cpxg" Dec 05 14:00:38 crc kubenswrapper[4858]: I1205 14:00:38.881015 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6cpxg" Dec 05 14:00:39 crc kubenswrapper[4858]: I1205 14:00:39.677689 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6cpxg" Dec 05 14:00:41 crc kubenswrapper[4858]: I1205 14:00:41.731082 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 14:00:41 crc kubenswrapper[4858]: I1205 14:00:41.783297 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 14:00:45 crc kubenswrapper[4858]: I1205 14:00:45.263551 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4nzbm"] Dec 05 14:00:45 crc kubenswrapper[4858]: I1205 14:00:45.645608 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4zztz"] Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.689908 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d27qp" event={"ID":"825a6e39-523e-4040-bee6-14b3ed5d2000","Type":"ContainerStarted","Data":"ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8"} Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.692906 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" event={"ID":"b5f0906b-baba-4d0c-9303-aaa807285c76","Type":"ContainerStarted","Data":"01419629f7754ff7094cec8d07f7074ecd76075e911e55694cbbe7d8efa99e3a"} Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.692950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" event={"ID":"b5f0906b-baba-4d0c-9303-aaa807285c76","Type":"ContainerStarted","Data":"a49763074f79855cefbc1d51fca137d3a96005950e3c9ffe033889255d8869e9"} Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.693124 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.704411 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hq9" event={"ID":"792eaec2-2c9f-487c-ab4b-437fa7897bee","Type":"ContainerStarted","Data":"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5"} Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.709191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-md2f2" event={"ID":"14af5b55-95bb-4d81-a390-3cbdc232f270","Type":"ContainerStarted","Data":"b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb"} Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.711237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dst87" event={"ID":"a39fea16-b688-40d4-8077-1bbd6d653cf4","Type":"ContainerStarted","Data":"a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21"} Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.713036 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d27qp" podStartSLOduration=12.736356799 podStartE2EDuration="1m38.713016168s" podCreationTimestamp="2025-12-05 13:59:08 +0000 UTC" firstStartedPulling="2025-12-05 13:59:13.372223782 +0000 UTC m=+161.919821921" lastFinishedPulling="2025-12-05 14:00:39.348883161 +0000 UTC m=+247.896481290" observedRunningTime="2025-12-05 14:00:46.708356397 +0000 UTC m=+255.255954546" watchObservedRunningTime="2025-12-05 14:00:46.713016168 +0000 UTC m=+255.260614307" Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.725148 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9z5pd" event={"ID":"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b","Type":"ContainerStarted","Data":"f506cfbab4a80ef478e9a8f41de4dda2ff76dd9d046824180911baa90cb68783"} Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.746808 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j9hq9" podStartSLOduration=3.961458643 podStartE2EDuration="1m35.746791565s" podCreationTimestamp="2025-12-05 13:59:11 +0000 UTC" firstStartedPulling="2025-12-05 13:59:14.574083547 +0000 UTC m=+163.121681686" lastFinishedPulling="2025-12-05 14:00:46.359416469 +0000 UTC m=+254.907014608" observedRunningTime="2025-12-05 14:00:46.745931621 +0000 UTC m=+255.293529760" watchObservedRunningTime="2025-12-05 14:00:46.746791565 +0000 UTC m=+255.294389704" Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.793462 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" podStartSLOduration=8.793442855 podStartE2EDuration="8.793442855s" podCreationTimestamp="2025-12-05 14:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:00:46.790680507 +0000 UTC m=+255.338278646" watchObservedRunningTime="2025-12-05 14:00:46.793442855 +0000 UTC m=+255.341040994" Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.817771 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dst87" podStartSLOduration=9.176044761 podStartE2EDuration="1m35.817752546s" podCreationTimestamp="2025-12-05 13:59:11 +0000 UTC" firstStartedPulling="2025-12-05 13:59:14.607507656 +0000 UTC m=+163.155105795" lastFinishedPulling="2025-12-05 14:00:41.249215441 +0000 UTC m=+249.796813580" observedRunningTime="2025-12-05 14:00:46.816378188 +0000 UTC m=+255.363976337" watchObservedRunningTime="2025-12-05 14:00:46.817752546 +0000 UTC m=+255.365350685" Dec 05 14:00:46 crc kubenswrapper[4858]: I1205 14:00:46.836933 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-md2f2" podStartSLOduration=14.319261833 podStartE2EDuration="1m35.836919634s" podCreationTimestamp="2025-12-05 13:59:11 +0000 UTC" firstStartedPulling="2025-12-05 13:59:14.594499589 +0000 UTC m=+163.142097728" lastFinishedPulling="2025-12-05 14:00:36.11215739 +0000 UTC m=+244.659755529" observedRunningTime="2025-12-05 14:00:46.836501082 +0000 UTC 
m=+255.384099221" watchObservedRunningTime="2025-12-05 14:00:46.836919634 +0000 UTC m=+255.384517763" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.578675 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2g9nd"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.579295 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2g9nd" podUID="3175524c-136d-44a0-9324-0d063376c05f" containerName="registry-server" containerID="cri-o://f589952472653e0680c07718c678f3e4e1668558210fb9c178fc324deb13b3f9" gracePeriod=30 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.589513 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d27qp"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.599920 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6cpxg"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.600326 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6cpxg" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerName="registry-server" containerID="cri-o://01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e" gracePeriod=30 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.608234 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9z5pd"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.615718 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9qgzs"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.616132 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" podUID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" containerName="marketplace-operator" containerID="cri-o://27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b" gracePeriod=30 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.629738 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fm4gw"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.630212 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fm4gw" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="registry-server" containerID="cri-o://6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8" gracePeriod=30 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.653146 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-md2f2"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.657058 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4fptm"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.658056 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.662140 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dst87"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.675562 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9hq9"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.682805 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4fptm"] Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.739148 4858 generic.go:334] "Generic (PLEG): container finished" podID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" containerID="f506cfbab4a80ef478e9a8f41de4dda2ff76dd9d046824180911baa90cb68783" exitCode=0 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.739214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9z5pd" event={"ID":"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b","Type":"ContainerDied","Data":"f506cfbab4a80ef478e9a8f41de4dda2ff76dd9d046824180911baa90cb68783"} Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.756048 4858 generic.go:334] "Generic (PLEG): container finished" podID="3175524c-136d-44a0-9324-0d063376c05f" containerID="f589952472653e0680c07718c678f3e4e1668558210fb9c178fc324deb13b3f9" exitCode=0 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.756292 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-md2f2" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerName="registry-server" containerID="cri-o://b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb" gracePeriod=30 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.756602 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2g9nd" event={"ID":"3175524c-136d-44a0-9324-0d063376c05f","Type":"ContainerDied","Data":"f589952472653e0680c07718c678f3e4e1668558210fb9c178fc324deb13b3f9"} Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.757397 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j9hq9" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerName="registry-server" containerID="cri-o://c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5" gracePeriod=30 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.757625 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dst87" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerName="registry-server" containerID="cri-o://a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21" gracePeriod=30 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.757795 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d27qp" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerName="registry-server" containerID="cri-o://ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8" gracePeriod=30 Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.785139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff2db84d-03a9-4c8e-9584-aeafa84ead17-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-4fptm\" (UID: \"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.785213 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff2db84d-03a9-4c8e-9584-aeafa84ead17-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4fptm\" (UID: \"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.785288 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjjjp\" (UniqueName: \"kubernetes.io/projected/ff2db84d-03a9-4c8e-9584-aeafa84ead17-kube-api-access-rjjjp\") pod \"marketplace-operator-79b997595-4fptm\" (UID: \"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.885963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjjjp\" (UniqueName: \"kubernetes.io/projected/ff2db84d-03a9-4c8e-9584-aeafa84ead17-kube-api-access-rjjjp\") pod \"marketplace-operator-79b997595-4fptm\" (UID: \"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.886029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff2db84d-03a9-4c8e-9584-aeafa84ead17-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4fptm\" (UID: \"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.886123 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff2db84d-03a9-4c8e-9584-aeafa84ead17-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4fptm\" (UID: \"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.887905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff2db84d-03a9-4c8e-9584-aeafa84ead17-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4fptm\" (UID: \"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.896920 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff2db84d-03a9-4c8e-9584-aeafa84ead17-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4fptm\" (UID: \"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:47 crc kubenswrapper[4858]: I1205 14:00:47.913711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjjjp\" (UniqueName: \"kubernetes.io/projected/ff2db84d-03a9-4c8e-9584-aeafa84ead17-kube-api-access-rjjjp\") pod \"marketplace-operator-79b997595-4fptm\" (UID: 
\"ff2db84d-03a9-4c8e-9584-aeafa84ead17\") " pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.047755 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.134473 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.191231 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcn8b\" (UniqueName: \"kubernetes.io/projected/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-kube-api-access-vcn8b\") pod \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.191337 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-operator-metrics\") pod \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.191415 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-trusted-ca\") pod \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\" (UID: \"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.192578 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" (UID: "b53086e2-584f-48c4-aaf9-dba8e0ebe5ee"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.203095 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" (UID: "b53086e2-584f-48c4-aaf9-dba8e0ebe5ee"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.203898 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-kube-api-access-vcn8b" (OuterVolumeSpecName: "kube-api-access-vcn8b") pod "b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" (UID: "b53086e2-584f-48c4-aaf9-dba8e0ebe5ee"). InnerVolumeSpecName "kube-api-access-vcn8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.252167 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6cpxg" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.293164 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vkc6\" (UniqueName: \"kubernetes.io/projected/e65f2d84-01e5-440d-b92c-79227561f3c0-kube-api-access-9vkc6\") pod \"e65f2d84-01e5-440d-b92c-79227561f3c0\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.293247 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-utilities\") pod \"e65f2d84-01e5-440d-b92c-79227561f3c0\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.293277 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-catalog-content\") pod \"e65f2d84-01e5-440d-b92c-79227561f3c0\" (UID: \"e65f2d84-01e5-440d-b92c-79227561f3c0\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.293582 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcn8b\" (UniqueName: \"kubernetes.io/projected/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-kube-api-access-vcn8b\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.293606 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.293619 4858 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.299608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-utilities" (OuterVolumeSpecName: "utilities") pod "e65f2d84-01e5-440d-b92c-79227561f3c0" (UID: "e65f2d84-01e5-440d-b92c-79227561f3c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.311934 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65f2d84-01e5-440d-b92c-79227561f3c0-kube-api-access-9vkc6" (OuterVolumeSpecName: "kube-api-access-9vkc6") pod "e65f2d84-01e5-440d-b92c-79227561f3c0" (UID: "e65f2d84-01e5-440d-b92c-79227561f3c0"). InnerVolumeSpecName "kube-api-access-9vkc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.394533 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.394569 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vkc6\" (UniqueName: \"kubernetes.io/projected/e65f2d84-01e5-440d-b92c-79227561f3c0-kube-api-access-9vkc6\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.398738 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e65f2d84-01e5-440d-b92c-79227561f3c0" (UID: "e65f2d84-01e5-440d-b92c-79227561f3c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.477536 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hq9_792eaec2-2c9f-487c-ab4b-437fa7897bee/registry-server/0.log" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.478194 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.495798 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65f2d84-01e5-440d-b92c-79227561f3c0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.518965 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dst87_a39fea16-b688-40d4-8077-1bbd6d653cf4/registry-server/0.log" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.519572 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2g9nd" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.521531 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dst87" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.521686 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.524004 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9z5pd" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596253 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-catalog-content\") pod \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqvmw\" (UniqueName: \"kubernetes.io/projected/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-kube-api-access-cqvmw\") pod \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596313 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzzf6\" (UniqueName: \"kubernetes.io/projected/a39fea16-b688-40d4-8077-1bbd6d653cf4-kube-api-access-wzzf6\") pod \"a39fea16-b688-40d4-8077-1bbd6d653cf4\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-catalog-content\") pod \"3175524c-136d-44a0-9324-0d063376c05f\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596374 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-catalog-content\") pod \"792eaec2-2c9f-487c-ab4b-437fa7897bee\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596398 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdwpm\" (UniqueName: \"kubernetes.io/projected/3175524c-136d-44a0-9324-0d063376c05f-kube-api-access-sdwpm\") pod \"3175524c-136d-44a0-9324-0d063376c05f\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596416 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqx6p\" (UniqueName: \"kubernetes.io/projected/792eaec2-2c9f-487c-ab4b-437fa7897bee-kube-api-access-kqx6p\") pod \"792eaec2-2c9f-487c-ab4b-437fa7897bee\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596445 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-catalog-content\") pod \"a39fea16-b688-40d4-8077-1bbd6d653cf4\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596463 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-utilities\") pod \"a39fea16-b688-40d4-8077-1bbd6d653cf4\" (UID: \"a39fea16-b688-40d4-8077-1bbd6d653cf4\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596510 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p24sn\" (UniqueName: 
\"kubernetes.io/projected/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-kube-api-access-p24sn\") pod \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596529 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-catalog-content\") pod \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596547 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-utilities\") pod \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\" (UID: \"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596575 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-utilities\") pod \"792eaec2-2c9f-487c-ab4b-437fa7897bee\" (UID: \"792eaec2-2c9f-487c-ab4b-437fa7897bee\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596599 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-utilities\") pod \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\" (UID: \"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.596616 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-utilities\") pod \"3175524c-136d-44a0-9324-0d063376c05f\" (UID: \"3175524c-136d-44a0-9324-0d063376c05f\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.598658 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-utilities" (OuterVolumeSpecName: "utilities") pod "792eaec2-2c9f-487c-ab4b-437fa7897bee" (UID: "792eaec2-2c9f-487c-ab4b-437fa7897bee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.601687 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d27qp_825a6e39-523e-4040-bee6-14b3ed5d2000/registry-server/0.log" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.601811 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-utilities" (OuterVolumeSpecName: "utilities") pod "d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" (UID: "d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.602009 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-utilities" (OuterVolumeSpecName: "utilities") pod "a39fea16-b688-40d4-8077-1bbd6d653cf4" (UID: "a39fea16-b688-40d4-8077-1bbd6d653cf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.602260 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d27qp" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.604641 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3175524c-136d-44a0-9324-0d063376c05f-kube-api-access-sdwpm" (OuterVolumeSpecName: "kube-api-access-sdwpm") pod "3175524c-136d-44a0-9324-0d063376c05f" (UID: "3175524c-136d-44a0-9324-0d063376c05f"). InnerVolumeSpecName "kube-api-access-sdwpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.606525 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a39fea16-b688-40d4-8077-1bbd6d653cf4-kube-api-access-wzzf6" (OuterVolumeSpecName: "kube-api-access-wzzf6") pod "a39fea16-b688-40d4-8077-1bbd6d653cf4" (UID: "a39fea16-b688-40d4-8077-1bbd6d653cf4"). InnerVolumeSpecName "kube-api-access-wzzf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.607761 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-kube-api-access-cqvmw" (OuterVolumeSpecName: "kube-api-access-cqvmw") pod "d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" (UID: "d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b"). InnerVolumeSpecName "kube-api-access-cqvmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.613011 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-kube-api-access-p24sn" (OuterVolumeSpecName: "kube-api-access-p24sn") pod "02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" (UID: "02abc0e5-f9e1-41de-bb1c-40bd94b29f1c"). InnerVolumeSpecName "kube-api-access-p24sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.613258 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792eaec2-2c9f-487c-ab4b-437fa7897bee-kube-api-access-kqx6p" (OuterVolumeSpecName: "kube-api-access-kqx6p") pod "792eaec2-2c9f-487c-ab4b-437fa7897bee" (UID: "792eaec2-2c9f-487c-ab4b-437fa7897bee"). InnerVolumeSpecName "kube-api-access-kqx6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.614841 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-utilities" (OuterVolumeSpecName: "utilities") pod "3175524c-136d-44a0-9324-0d063376c05f" (UID: "3175524c-136d-44a0-9324-0d063376c05f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.651243 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-utilities" (OuterVolumeSpecName: "utilities") pod "02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" (UID: "02abc0e5-f9e1-41de-bb1c-40bd94b29f1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.652162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" (UID: "02abc0e5-f9e1-41de-bb1c-40bd94b29f1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699228 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-utilities\") pod \"825a6e39-523e-4040-bee6-14b3ed5d2000\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699415 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dfr8\" (UniqueName: \"kubernetes.io/projected/825a6e39-523e-4040-bee6-14b3ed5d2000-kube-api-access-4dfr8\") pod \"825a6e39-523e-4040-bee6-14b3ed5d2000\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699457 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-catalog-content\") pod \"825a6e39-523e-4040-bee6-14b3ed5d2000\" (UID: \"825a6e39-523e-4040-bee6-14b3ed5d2000\") " Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699748 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p24sn\" (UniqueName: \"kubernetes.io/projected/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-kube-api-access-p24sn\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699767 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699781 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699792 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699804 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699814 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699843 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqvmw\" (UniqueName: \"kubernetes.io/projected/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-kube-api-access-cqvmw\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699857 4858 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-wzzf6\" (UniqueName: \"kubernetes.io/projected/a39fea16-b688-40d4-8077-1bbd6d653cf4-kube-api-access-wzzf6\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699866 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdwpm\" (UniqueName: \"kubernetes.io/projected/3175524c-136d-44a0-9324-0d063376c05f-kube-api-access-sdwpm\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699874 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqx6p\" (UniqueName: \"kubernetes.io/projected/792eaec2-2c9f-487c-ab4b-437fa7897bee-kube-api-access-kqx6p\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.699882 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.706706 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825a6e39-523e-4040-bee6-14b3ed5d2000-kube-api-access-4dfr8" (OuterVolumeSpecName: "kube-api-access-4dfr8") pod "825a6e39-523e-4040-bee6-14b3ed5d2000" (UID: "825a6e39-523e-4040-bee6-14b3ed5d2000"). InnerVolumeSpecName "kube-api-access-4dfr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.716663 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-utilities" (OuterVolumeSpecName: "utilities") pod "825a6e39-523e-4040-bee6-14b3ed5d2000" (UID: "825a6e39-523e-4040-bee6-14b3ed5d2000"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.747183 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4fptm"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.758229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3175524c-136d-44a0-9324-0d063376c05f" (UID: "3175524c-136d-44a0-9324-0d063376c05f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.770161 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j9hq9_792eaec2-2c9f-487c-ab4b-437fa7897bee/registry-server/0.log" Dec 05 14:00:48 crc kubenswrapper[4858]: W1205 14:00:48.771068 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff2db84d_03a9_4c8e_9584_aeafa84ead17.slice/crio-fb5aab3fe031876e2535476bbe93a84f754e7834c8f91bc1f93697320d68f227 WatchSource:0}: Error finding container fb5aab3fe031876e2535476bbe93a84f754e7834c8f91bc1f93697320d68f227: Status 404 returned error can't find the container with id fb5aab3fe031876e2535476bbe93a84f754e7834c8f91bc1f93697320d68f227 Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.774265 4858 generic.go:334] "Generic (PLEG): container finished" podID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerID="c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5" exitCode=1 Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.774512 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hq9" event={"ID":"792eaec2-2c9f-487c-ab4b-437fa7897bee","Type":"ContainerDied","Data":"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.774787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j9hq9" event={"ID":"792eaec2-2c9f-487c-ab4b-437fa7897bee","Type":"ContainerDied","Data":"a74719a0dce15f69efd7a282fd33c7a42ded7a22e4abd49299fac28d26596c2f"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.774914 4858 scope.go:117] "RemoveContainer" containerID="c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.775512 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j9hq9" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.785516 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dst87_a39fea16-b688-40d4-8077-1bbd6d653cf4/registry-server/0.log" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.788896 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "792eaec2-2c9f-487c-ab4b-437fa7897bee" (UID: "792eaec2-2c9f-487c-ab4b-437fa7897bee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.789038 4858 generic.go:334] "Generic (PLEG): container finished" podID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerID="a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21" exitCode=1 Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.789094 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dst87" event={"ID":"a39fea16-b688-40d4-8077-1bbd6d653cf4","Type":"ContainerDied","Data":"a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.789121 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dst87" event={"ID":"a39fea16-b688-40d4-8077-1bbd6d653cf4","Type":"ContainerDied","Data":"7c77c23d96630b5af9b9c8f210a66b41d557e1cc828d1c5faa1b37ffbc8dff77"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.789197 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dst87" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.790409 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" (UID: "d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.802568 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9z5pd" event={"ID":"d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b","Type":"ContainerDied","Data":"8eeee7f2008efd2c7e0c091f4ba9cba1606c0aaa1c2d4ef98e5f41cdd1c3d4a4"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.802671 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9z5pd" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.808630 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3175524c-136d-44a0-9324-0d063376c05f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.809110 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792eaec2-2c9f-487c-ab4b-437fa7897bee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.809181 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.810062 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.810246 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dfr8\" (UniqueName: \"kubernetes.io/projected/825a6e39-523e-4040-bee6-14b3ed5d2000-kube-api-access-4dfr8\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.810528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cpxg" event={"ID":"e65f2d84-01e5-440d-b92c-79227561f3c0","Type":"ContainerDied","Data":"01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.810504 4858 generic.go:334] "Generic (PLEG): container finished" podID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerID="01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e" exitCode=0 Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.810592 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6cpxg" event={"ID":"e65f2d84-01e5-440d-b92c-79227561f3c0","Type":"ContainerDied","Data":"4d2669b2ffb24a7eaa5f5833534c276a103fbc0bfe1789f33358d957bb1d2131"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.810680 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6cpxg" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.818298 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-d27qp_825a6e39-523e-4040-bee6-14b3ed5d2000/registry-server/0.log" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.821002 4858 generic.go:334] "Generic (PLEG): container finished" podID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerID="ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8" exitCode=1 Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.821048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d27qp" event={"ID":"825a6e39-523e-4040-bee6-14b3ed5d2000","Type":"ContainerDied","Data":"ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.821072 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d27qp" event={"ID":"825a6e39-523e-4040-bee6-14b3ed5d2000","Type":"ContainerDied","Data":"2d458bbc13a730a08a6c7e9830d234c955015a5ff4460c2b51906b61e6a1ed3d"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.821311 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d27qp" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.826207 4858 generic.go:334] "Generic (PLEG): container finished" podID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" containerID="27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b" exitCode=0 Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.826269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" event={"ID":"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee","Type":"ContainerDied","Data":"27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.826297 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" event={"ID":"b53086e2-584f-48c4-aaf9-dba8e0ebe5ee","Type":"ContainerDied","Data":"39b5ac7aa11971fa7ef839b316e9fb0f6918e7320da16f7cec0b76292dbfc3a7"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.826366 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9qgzs" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.840117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2g9nd" event={"ID":"3175524c-136d-44a0-9324-0d063376c05f","Type":"ContainerDied","Data":"9b7db6ea698937f4e9541dea2ebd492a79d5e0116b3d6ba953ac7439afd9a8c1"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.840228 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2g9nd" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.848591 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "825a6e39-523e-4040-bee6-14b3ed5d2000" (UID: "825a6e39-523e-4040-bee6-14b3ed5d2000"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.855656 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6cpxg"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.858913 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a39fea16-b688-40d4-8077-1bbd6d653cf4" (UID: "a39fea16-b688-40d4-8077-1bbd6d653cf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.859035 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6cpxg"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.866027 4858 generic.go:334] "Generic (PLEG): container finished" podID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerID="6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8" exitCode=0 Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.866133 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fm4gw" event={"ID":"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c","Type":"ContainerDied","Data":"6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.866223 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fm4gw" event={"ID":"02abc0e5-f9e1-41de-bb1c-40bd94b29f1c","Type":"ContainerDied","Data":"feee159d093c938f6e5d732c08b9a1856618e45144e36e38fdc5a016f2f50469"} Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.866384 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fm4gw" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.891295 4858 scope.go:117] "RemoveContainer" containerID="40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.899284 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9z5pd"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.901537 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9z5pd"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.911597 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a39fea16-b688-40d4-8077-1bbd6d653cf4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.911624 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/825a6e39-523e-4040-bee6-14b3ed5d2000-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.916538 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2g9nd"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.918352 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2g9nd"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.930542 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9qgzs"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.934547 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9qgzs"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.949701 4858 scope.go:117] "RemoveContainer" containerID="6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.949839 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fm4gw"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.959744 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fm4gw"] Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.984769 4858 scope.go:117] "RemoveContainer" containerID="c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5" Dec 05 14:00:48 crc kubenswrapper[4858]: E1205 14:00:48.987143 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5\": container with ID starting with c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5 not found: ID does not exist" containerID="c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5" Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.987177 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5"} err="failed to get container status \"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5\": rpc error: code = NotFound desc = could not find container \"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5\": container with ID starting with 
Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.987177 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5"} err="failed to get container status \"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5\": rpc error: code = NotFound desc = could not find container \"c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5\": container with ID starting with c119f7e6beb057f463b2f8e5fe453f92d6d98513eae7eebd2b0d97aa133f24b5 not found: ID does not exist"
Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.987251 4858 scope.go:117] "RemoveContainer" containerID="40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9"
Dec 05 14:00:48 crc kubenswrapper[4858]: E1205 14:00:48.987886 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9\": container with ID starting with 40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9 not found: ID does not exist" containerID="40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9"
Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.987935 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9"} err="failed to get container status \"40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9\": rpc error: code = NotFound desc = could not find container \"40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9\": container with ID starting with 40dde36930e6d938c27fb77fcc30862f3d8c5009b0ee3db0cef0001c1c0deda9 not found: ID does not exist"
Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.987968 4858 scope.go:117] "RemoveContainer" containerID="6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310"
Dec 05 14:00:48 crc kubenswrapper[4858]: E1205 14:00:48.988312 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310\": container with ID starting with 6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310 not found: ID does not exist" containerID="6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310"
Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.988339 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310"} err="failed to get container status \"6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310\": rpc error: code = NotFound desc = could not find container \"6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310\": container with ID starting with 6e05965e6ac7df4a18e6909b00d519e55740120914db94b4c6f27ca0298cd310 not found: ID does not exist"
Dec 05 14:00:48 crc kubenswrapper[4858]: I1205 14:00:48.988356 4858 scope.go:117] "RemoveContainer" containerID="a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.032195 4858 scope.go:117] "RemoveContainer" containerID="b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.056930 4858 scope.go:117] "RemoveContainer" containerID="7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.073487 4858 scope.go:117] "RemoveContainer" containerID="a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21"
Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.074417 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21\": container with ID starting with a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21 not found: ID does not exist" containerID="a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.074452 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21"} err="failed to get container status \"a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21\": rpc error: code = NotFound desc = could not find container \"a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21\": container with ID starting with a489d834bbb882b555b76ba08d637411619c9f49bff0dece99f28b3789721e21 not found: ID does not exist"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.074475 4858 scope.go:117] "RemoveContainer" containerID="b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2"
Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.074695 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2\": container with ID starting with b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2 not found: ID does not exist" containerID="b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.074713 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2"} err="failed to get container status \"b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2\": rpc error: code = NotFound desc = could not find container \"b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2\": container with ID starting with b88ca6f52f339e57db8a71fedc3b6d53c51be7fc56fa958cc6ca860907663ff2 not found: ID does not exist"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.074737 4858 scope.go:117] "RemoveContainer" containerID="7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477"
Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.075078 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477\": container with ID starting with 7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477 not found: ID does not exist" containerID="7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.075095 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477"} err="failed to get container status \"7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477\": rpc error: code = NotFound desc = could not find container \"7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477\": container with ID starting with 7fab21ff074e34c2429c2426dc53c36b5089fb17637159b2f4600e89daba5477 not found: ID does not exist"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.075108 4858 scope.go:117] "RemoveContainer" containerID="f506cfbab4a80ef478e9a8f41de4dda2ff76dd9d046824180911baa90cb68783"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.114256 4858 scope.go:117] "RemoveContainer" containerID="fe6002cd1fb9790fe773cc78d90d919ff60f2e88e143c56c45bbeccf997ea5dc"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.124916 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j9hq9"]
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.129353 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j9hq9"]
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.135726 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-md2f2"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.141577 4858 scope.go:117] "RemoveContainer" containerID="01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.142574 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dst87"]
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.147973 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dst87"]
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.166420 4858 scope.go:117] "RemoveContainer" containerID="59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.186715 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d27qp"]
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.197406 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d27qp"]
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.208087 4858 scope.go:117] "RemoveContainer" containerID="9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.222745 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rqxz\" (UniqueName: \"kubernetes.io/projected/14af5b55-95bb-4d81-a390-3cbdc232f270-kube-api-access-7rqxz\") pod \"14af5b55-95bb-4d81-a390-3cbdc232f270\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") "
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.222806 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-utilities\") pod \"14af5b55-95bb-4d81-a390-3cbdc232f270\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") "
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.222860 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-catalog-content\") pod \"14af5b55-95bb-4d81-a390-3cbdc232f270\" (UID: \"14af5b55-95bb-4d81-a390-3cbdc232f270\") "
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.226649 4858 scope.go:117] "RemoveContainer" containerID="01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e"
Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.226793 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14af5b55-95bb-4d81-a390-3cbdc232f270-kube-api-access-7rqxz" (OuterVolumeSpecName: "kube-api-access-7rqxz") pod "14af5b55-95bb-4d81-a390-3cbdc232f270" (UID: "14af5b55-95bb-4d81-a390-3cbdc232f270"). InnerVolumeSpecName "kube-api-access-7rqxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.227349 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e\": container with ID starting with 01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e not found: ID does not exist" containerID="01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.227393 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e"} err="failed to get container status \"01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e\": rpc error: code = NotFound desc = could not find container \"01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e\": container with ID starting with 01fc0ff5ea3bfd5b4634851b2dc0842012164a8888519f84b494c1c5e188835e not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.227405 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-utilities" (OuterVolumeSpecName: "utilities") pod "14af5b55-95bb-4d81-a390-3cbdc232f270" (UID: "14af5b55-95bb-4d81-a390-3cbdc232f270"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.227417 4858 scope.go:117] "RemoveContainer" containerID="59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.227709 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528\": container with ID starting with 59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528 not found: ID does not exist" containerID="59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.227744 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528"} err="failed to get container status \"59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528\": rpc error: code = NotFound desc = could not find container \"59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528\": container with ID starting with 59e2fe482447485b4630b0ae70e350d392261133e704e995a2e06a88fac5d528 not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.227769 4858 scope.go:117] "RemoveContainer" containerID="9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.228409 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3\": container with ID starting with 9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3 not found: ID does not exist" containerID="9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.228489 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3"} err="failed to get container status \"9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3\": rpc error: code = NotFound desc = could not find container \"9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3\": container with ID starting with 9711a13f6c124c87b81f8c56e4485df0a3dc5ede250fecbada05d876a9b364f3 not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.228541 4858 scope.go:117] "RemoveContainer" containerID="ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.252259 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14af5b55-95bb-4d81-a390-3cbdc232f270" (UID: "14af5b55-95bb-4d81-a390-3cbdc232f270"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.252284 4858 scope.go:117] "RemoveContainer" containerID="97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.273740 4858 scope.go:117] "RemoveContainer" containerID="c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.285417 4858 scope.go:117] "RemoveContainer" containerID="ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.285760 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8\": container with ID starting with ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8 not found: ID does not exist" containerID="ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.285786 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8"} err="failed to get container status \"ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8\": rpc error: code = NotFound desc = could not find container \"ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8\": container with ID starting with ef7ec3de03cb1fddbf2ea5308c0870af9ee2ea87ba0bd58b1a01e7e2047856b8 not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.285806 4858 scope.go:117] "RemoveContainer" containerID="97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.286168 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223\": container with ID starting with 97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223 not found: ID does not exist" containerID="97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.286185 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223"} err="failed to get container status \"97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223\": rpc error: code = NotFound desc = could not find container \"97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223\": container with ID starting with 97cc12963c39fde0b908e6ada76a2a95e353be0828a4dbe6ee62dff8896b0223 not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.286199 4858 scope.go:117] "RemoveContainer" containerID="c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.286511 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f\": container with ID starting with c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f not found: ID does not exist" containerID="c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.286532 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f"} err="failed to get container status \"c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f\": rpc error: code = NotFound desc = could not find container \"c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f\": container with ID starting with c9ac2e542f4dc7d8c7ad628fddc87d409417e05711e58ff90d1d177f515f7e0f not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.286549 4858 scope.go:117] "RemoveContainer" containerID="27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.304578 4858 scope.go:117] "RemoveContainer" containerID="27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.304938 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b\": container with ID starting with 27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b not found: ID does not exist" containerID="27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.304993 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b"} err="failed to get container status \"27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b\": rpc error: code = NotFound desc = could not find container \"27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b\": container with ID starting with 27c5ed4b8197803528014f6caa0ab1318223939a3cf7b7e5f817948c21d7400b not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.305021 4858 scope.go:117] "RemoveContainer" containerID="f589952472653e0680c07718c678f3e4e1668558210fb9c178fc324deb13b3f9" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.324330 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rqxz\" (UniqueName: \"kubernetes.io/projected/14af5b55-95bb-4d81-a390-3cbdc232f270-kube-api-access-7rqxz\") on node 
\"crc\" DevicePath \"\"" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.324364 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.324376 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14af5b55-95bb-4d81-a390-3cbdc232f270-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.342228 4858 scope.go:117] "RemoveContainer" containerID="8f8f6535c6c206bab9191e690c4e65a134d1010052ca4b328d6a3ab8a24390b5" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.364720 4858 scope.go:117] "RemoveContainer" containerID="00ca6c075835fdccec329c95fa7bf48525735894c7ad4320d1ed40f3e216af43" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.383321 4858 scope.go:117] "RemoveContainer" containerID="6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.407666 4858 scope.go:117] "RemoveContainer" containerID="06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.426405 4858 scope.go:117] "RemoveContainer" containerID="e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.447142 4858 scope.go:117] "RemoveContainer" containerID="6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.448018 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8\": container with ID starting with 6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8 not found: ID does not exist" containerID="6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.448053 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8"} err="failed to get container status \"6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8\": rpc error: code = NotFound desc = could not find container \"6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8\": container with ID starting with 6a7220df84ac31d37dfb79d699541ef4b731c7fdde66f1996dacc2da2f8aa8e8 not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.448076 4858 scope.go:117] "RemoveContainer" containerID="06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.448450 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d\": container with ID starting with 06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d not found: ID does not exist" containerID="06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.448475 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d"} err="failed to get container status \"06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d\": rpc error: code = NotFound desc = could not find container \"06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d\": container with ID starting with 06018548b1256aa3a7a822ecd41b9272a7c0319e831c0772d4068bee1c22bb1d not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.448489 4858 scope.go:117] "RemoveContainer" containerID="e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.448964 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4\": container with ID starting with e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4 not found: ID does not exist" containerID="e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.448984 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4"} err="failed to get container status \"e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4\": rpc error: code = NotFound desc = could not find container \"e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4\": container with ID starting with e7872d2920355a46e03671f684ecd6d1489fc49677646e20f0951b79aaa894a4 not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.874520 4858 generic.go:334] "Generic (PLEG): container finished" podID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerID="b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb" exitCode=0 Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.874556 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-md2f2" event={"ID":"14af5b55-95bb-4d81-a390-3cbdc232f270","Type":"ContainerDied","Data":"b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb"} Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.874849 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-md2f2" event={"ID":"14af5b55-95bb-4d81-a390-3cbdc232f270","Type":"ContainerDied","Data":"679c7e436fa96de73325fac01061e617b285e87c241e5fe0c0aea94adf4d2337"} Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.874615 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-md2f2" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.874897 4858 scope.go:117] "RemoveContainer" containerID="b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.879336 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" event={"ID":"ff2db84d-03a9-4c8e-9584-aeafa84ead17","Type":"ContainerStarted","Data":"d28d165b0b7bddf89957c7f840bda46f2752488e5c295169884323a7cf2274c1"} Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.879382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" event={"ID":"ff2db84d-03a9-4c8e-9584-aeafa84ead17","Type":"ContainerStarted","Data":"fb5aab3fe031876e2535476bbe93a84f754e7834c8f91bc1f93697320d68f227"} Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.879667 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.882110 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.893790 4858 scope.go:117] "RemoveContainer" containerID="e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.916008 4858 scope.go:117] "RemoveContainer" containerID="1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.922065 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" podStartSLOduration=2.9220268799999998 podStartE2EDuration="2.92202688s" podCreationTimestamp="2025-12-05 14:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:00:49.920979201 +0000 UTC m=+258.468577340" watchObservedRunningTime="2025-12-05 14:00:49.92202688 +0000 UTC m=+258.469625009" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.937237 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" path="/var/lib/kubelet/pods/02abc0e5-f9e1-41de-bb1c-40bd94b29f1c/volumes" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.939029 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3175524c-136d-44a0-9324-0d063376c05f" path="/var/lib/kubelet/pods/3175524c-136d-44a0-9324-0d063376c05f/volumes" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.939692 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" path="/var/lib/kubelet/pods/792eaec2-2c9f-487c-ab4b-437fa7897bee/volumes" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.940987 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" path="/var/lib/kubelet/pods/825a6e39-523e-4040-bee6-14b3ed5d2000/volumes" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.942502 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" path="/var/lib/kubelet/pods/a39fea16-b688-40d4-8077-1bbd6d653cf4/volumes" Dec 05 14:00:49 crc 
kubenswrapper[4858]: I1205 14:00:49.943543 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" path="/var/lib/kubelet/pods/b53086e2-584f-48c4-aaf9-dba8e0ebe5ee/volumes" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.944778 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" path="/var/lib/kubelet/pods/d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b/volumes" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.945615 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" path="/var/lib/kubelet/pods/e65f2d84-01e5-440d-b92c-79227561f3c0/volumes" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.946681 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-md2f2"] Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.946768 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-md2f2"] Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.964228 4858 scope.go:117] "RemoveContainer" containerID="b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.972544 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb\": container with ID starting with b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb not found: ID does not exist" containerID="b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.972599 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb"} err="failed to get container status \"b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb\": rpc error: code = NotFound desc = could not find container \"b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb\": container with ID starting with b10432ab99e320bca1886cbb8e5b6f1237d629d0149e9581f74b743e58cbefdb not found: ID does not exist" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.972631 4858 scope.go:117] "RemoveContainer" containerID="e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.973001 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe\": container with ID starting with e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe not found: ID does not exist" containerID="e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.973028 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe"} err="failed to get container status \"e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe\": rpc error: code = NotFound desc = could not find container \"e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe\": container with ID starting with e32ef19b972cee1803cc88f255b2e7ec728e7afa7b5a27e636eaa310b0daaafe not found: ID does not exist" Dec 05 
14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.973119 4858 scope.go:117] "RemoveContainer" containerID="1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64" Dec 05 14:00:49 crc kubenswrapper[4858]: E1205 14:00:49.973535 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64\": container with ID starting with 1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64 not found: ID does not exist" containerID="1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64" Dec 05 14:00:49 crc kubenswrapper[4858]: I1205 14:00:49.973556 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64"} err="failed to get container status \"1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64\": rpc error: code = NotFound desc = could not find container \"1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64\": container with ID starting with 1b1117017872ef06bb079c574416380f7b846defafc6acc3937d7c0eba797d64 not found: ID does not exist" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948288 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhrc4"] Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948470 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3175524c-136d-44a0-9324-0d063376c05f" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948480 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3175524c-136d-44a0-9324-0d063376c05f" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948487 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948493 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948502 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948508 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948517 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948522 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948532 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948538 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948546 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" containerName="marketplace-operator" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948551 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" containerName="marketplace-operator" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948558 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948565 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948576 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948585 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948594 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3175524c-136d-44a0-9324-0d063376c05f" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948600 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3175524c-136d-44a0-9324-0d063376c05f" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948609 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948614 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948621 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948627 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948637 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948643 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948652 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948657 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948665 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3175524c-136d-44a0-9324-0d063376c05f" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948670 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3175524c-136d-44a0-9324-0d063376c05f" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948677 4858 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948682 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948689 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948695 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948704 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948709 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948718 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948723 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948732 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948738 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948746 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948752 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948759 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948765 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948773 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948779 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: E1205 14:00:50.948786 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" containerName="extract-utilities" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948793 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" containerName="extract-utilities" Dec 05 14:00:50 crc 
kubenswrapper[4858]: E1205 14:00:50.948799 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948805 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948907 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3175524c-136d-44a0-9324-0d063376c05f" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948916 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948926 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b53086e2-584f-48c4-aaf9-dba8e0ebe5ee" containerName="marketplace-operator" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948933 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="825a6e39-523e-4040-bee6-14b3ed5d2000" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948940 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a39fea16-b688-40d4-8077-1bbd6d653cf4" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948948 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="792eaec2-2c9f-487c-ab4b-437fa7897bee" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948955 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65f2d84-01e5-440d-b92c-79227561f3c0" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948962 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="02abc0e5-f9e1-41de-bb1c-40bd94b29f1c" containerName="registry-server" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.948970 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4fa1ddc-6147-4c40-9e8c-8a7527bdaf0b" containerName="extract-content" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.950688 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.957409 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 14:00:50 crc kubenswrapper[4858]: I1205 14:00:50.957704 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhrc4"] Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.052196 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs4sc\" (UniqueName: \"kubernetes.io/projected/67328f86-d148-42b9-b5e0-29d1aa422b03-kube-api-access-gs4sc\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.052255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-catalog-content\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.052369 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-utilities\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.147849 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4n4r2"] Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.149143 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.151093 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.154465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-utilities\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.154586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs4sc\" (UniqueName: \"kubernetes.io/projected/67328f86-d148-42b9-b5e0-29d1aa422b03-kube-api-access-gs4sc\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.154895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-catalog-content\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.155172 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-utilities\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.155362 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-catalog-content\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.158457 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4n4r2"] Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.179802 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs4sc\" (UniqueName: \"kubernetes.io/projected/67328f86-d148-42b9-b5e0-29d1aa422b03-kube-api-access-gs4sc\") pod \"community-operators-mhrc4\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.255908 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-catalog-content\") pod \"certified-operators-4n4r2\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.256232 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-utilities\") pod \"certified-operators-4n4r2\" (UID: 
\"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.256289 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8pd8\" (UniqueName: \"kubernetes.io/projected/cb1143a5-8f39-460c-9d9c-121a877118b9-kube-api-access-r8pd8\") pod \"certified-operators-4n4r2\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.273794 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.357650 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-catalog-content\") pod \"certified-operators-4n4r2\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.357704 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-utilities\") pod \"certified-operators-4n4r2\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.357758 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8pd8\" (UniqueName: \"kubernetes.io/projected/cb1143a5-8f39-460c-9d9c-121a877118b9-kube-api-access-r8pd8\") pod \"certified-operators-4n4r2\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.358753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-catalog-content\") pod \"certified-operators-4n4r2\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.359031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-utilities\") pod \"certified-operators-4n4r2\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.388610 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8pd8\" (UniqueName: \"kubernetes.io/projected/cb1143a5-8f39-460c-9d9c-121a877118b9-kube-api-access-r8pd8\") pod \"certified-operators-4n4r2\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") " pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.466779 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.673671 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhrc4"] Dec 05 14:00:51 crc kubenswrapper[4858]: W1205 14:00:51.686514 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67328f86_d148_42b9_b5e0_29d1aa422b03.slice/crio-97a3c76ab2591979031e192ae789e83de090ee3915403a48dcd27f4e64a5ec95 WatchSource:0}: Error finding container 97a3c76ab2591979031e192ae789e83de090ee3915403a48dcd27f4e64a5ec95: Status 404 returned error can't find the container with id 97a3c76ab2591979031e192ae789e83de090ee3915403a48dcd27f4e64a5ec95 Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.862792 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4n4r2"] Dec 05 14:00:51 crc kubenswrapper[4858]: W1205 14:00:51.867971 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb1143a5_8f39_460c_9d9c_121a877118b9.slice/crio-9ff0010cfac3937df04fe4d2dc799ce3b32a61362ccee241d452d0795bfa58de WatchSource:0}: Error finding container 9ff0010cfac3937df04fe4d2dc799ce3b32a61362ccee241d452d0795bfa58de: Status 404 returned error can't find the container with id 9ff0010cfac3937df04fe4d2dc799ce3b32a61362ccee241d452d0795bfa58de Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.914926 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14af5b55-95bb-4d81-a390-3cbdc232f270" path="/var/lib/kubelet/pods/14af5b55-95bb-4d81-a390-3cbdc232f270/volumes" Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.916847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n4r2" event={"ID":"cb1143a5-8f39-460c-9d9c-121a877118b9","Type":"ContainerStarted","Data":"9ff0010cfac3937df04fe4d2dc799ce3b32a61362ccee241d452d0795bfa58de"} Dec 05 14:00:51 crc kubenswrapper[4858]: I1205 14:00:51.918094 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhrc4" event={"ID":"67328f86-d148-42b9-b5e0-29d1aa422b03","Type":"ContainerStarted","Data":"97a3c76ab2591979031e192ae789e83de090ee3915403a48dcd27f4e64a5ec95"} Dec 05 14:00:52 crc kubenswrapper[4858]: I1205 14:00:52.929652 4858 generic.go:334] "Generic (PLEG): container finished" podID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerID="b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e" exitCode=0 Dec 05 14:00:52 crc kubenswrapper[4858]: I1205 14:00:52.929784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhrc4" event={"ID":"67328f86-d148-42b9-b5e0-29d1aa422b03","Type":"ContainerDied","Data":"b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e"} Dec 05 14:00:52 crc kubenswrapper[4858]: I1205 14:00:52.934449 4858 generic.go:334] "Generic (PLEG): container finished" podID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerID="3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b" exitCode=0 Dec 05 14:00:52 crc kubenswrapper[4858]: I1205 14:00:52.934534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n4r2" event={"ID":"cb1143a5-8f39-460c-9d9c-121a877118b9","Type":"ContainerDied","Data":"3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b"} 
Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.360424 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2hzq"] Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.361631 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.364248 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.383148 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jtlk\" (UniqueName: \"kubernetes.io/projected/461fbf64-d6a9-4371-a580-1d832c1a8a29-kube-api-access-9jtlk\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.383487 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461fbf64-d6a9-4371-a580-1d832c1a8a29-catalog-content\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.383612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461fbf64-d6a9-4371-a580-1d832c1a8a29-utilities\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.400339 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2hzq"] Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.485404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461fbf64-d6a9-4371-a580-1d832c1a8a29-catalog-content\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.485445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461fbf64-d6a9-4371-a580-1d832c1a8a29-utilities\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.485516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jtlk\" (UniqueName: \"kubernetes.io/projected/461fbf64-d6a9-4371-a580-1d832c1a8a29-kube-api-access-9jtlk\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.486164 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/461fbf64-d6a9-4371-a580-1d832c1a8a29-catalog-content\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 
14:00:53.486173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/461fbf64-d6a9-4371-a580-1d832c1a8a29-utilities\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.532553 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jtlk\" (UniqueName: \"kubernetes.io/projected/461fbf64-d6a9-4371-a580-1d832c1a8a29-kube-api-access-9jtlk\") pod \"redhat-operators-k2hzq\" (UID: \"461fbf64-d6a9-4371-a580-1d832c1a8a29\") " pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.549082 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9fbw6"] Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.553196 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.563004 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.569954 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fbw6"] Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.586205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-catalog-content\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.586261 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-utilities\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.586287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8gls\" (UniqueName: \"kubernetes.io/projected/9bdceab9-085a-485f-87c3-54a30f6a4b01-kube-api-access-w8gls\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.676357 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.688366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-catalog-content\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.688434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-utilities\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.688461 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8gls\" (UniqueName: \"kubernetes.io/projected/9bdceab9-085a-485f-87c3-54a30f6a4b01-kube-api-access-w8gls\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.688959 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-catalog-content\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.688971 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-utilities\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.711760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8gls\" (UniqueName: \"kubernetes.io/projected/9bdceab9-085a-485f-87c3-54a30f6a4b01-kube-api-access-w8gls\") pod \"redhat-marketplace-9fbw6\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.885127 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6"
Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.941301 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhrc4" event={"ID":"67328f86-d148-42b9-b5e0-29d1aa422b03","Type":"ContainerStarted","Data":"0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384"}
Dec 05 14:00:53 crc kubenswrapper[4858]: I1205 14:00:53.946554 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n4r2" event={"ID":"cb1143a5-8f39-460c-9d9c-121a877118b9","Type":"ContainerStarted","Data":"0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d"}
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.030444 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.031185 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.032750 4858 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.032793 4858 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.032997 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033011 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.033022 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033029 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.033038 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033043 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.033052 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033057 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.033067 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033072 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.033080 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033086 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.033095 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033101 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033199 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033211 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033222 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033232 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033241 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033458 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033520 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979" gracePeriod=15
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033657 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3" gracePeriod=15
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033703 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7" gracePeriod=15
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033733 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489" gracePeriod=15
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.033758 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695" gracePeriod=15
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.098943 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.098996 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.099025 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.099048 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.099115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.099140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.099158 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.099183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.101819 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.141355 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2hzq"]
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.151447 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-k2hzq.187e5689e05d74e3 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-k2hzq,UID:461fbf64-d6a9-4371-a580-1d832c1a8a29,APIVersion:v1,ResourceVersion:29567,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 14:00:54.150542563 +0000 UTC m=+262.698140702,LastTimestamp:2025-12-05 14:00:54.150542563 +0000 UTC m=+262.698140702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.200237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.200584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.200616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.202735 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204371 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204505 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204560 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204614 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204644 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.204753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.238477 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb1143a5_8f39_460c_9d9c_121a877118b9.slice/crio-conmon-0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67328f86_d148_42b9_b5e0_29d1aa422b03.slice/crio-0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb1143a5_8f39_460c_9d9c_121a877118b9.slice/crio-0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d.scope\": RecentStats: unable to find data in memory cache]"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.386226 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.409309 4858 log.go:32] "RunPodSandbox from runtime service failed" err=<
Dec 05 14:00:54 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a" Netns:"/var/run/netns/95ec48e5-e5a3-44a8-b72f-f007598824b7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 14:00:54 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:00:54 crc kubenswrapper[4858]: >
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.409671 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Dec 05 14:00:54 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a" Netns:"/var/run/netns/95ec48e5-e5a3-44a8-b72f-f007598824b7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 14:00:54 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:00:54 crc kubenswrapper[4858]: > pod="openshift-marketplace/redhat-marketplace-9fbw6"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.409696 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Dec 05 14:00:54 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a" Netns:"/var/run/netns/95ec48e5-e5a3-44a8-b72f-f007598824b7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 14:00:54 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:00:54 crc kubenswrapper[4858]: > pod="openshift-marketplace/redhat-marketplace-9fbw6"
Dec 05 14:00:54 crc kubenswrapper[4858]: E1205 14:00:54.409762 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-9fbw6_openshift-marketplace(9bdceab9-085a-485f-87c3-54a30f6a4b01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-9fbw6_openshift-marketplace(9bdceab9-085a-485f-87c3-54a30f6a4b01)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a\\\" Netns:\\\"/var/run/netns/95ec48e5-e5a3-44a8-b72f-f007598824b7\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=83ca4e7c04b9dd48af55b8904a471a68df9e69ea986e37714a43b1ebf0e8e80a;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s\\\": dial tcp 38.102.83.174:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-9fbw6" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01"
Dec 05 14:00:54 crc kubenswrapper[4858]: W1205 14:00:54.424236 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ac7ada546e705cbb560c1a7190dd041aa1b7bc22e3cc0ed4a86eccfd133fc4eb WatchSource:0}: Error finding container ac7ada546e705cbb560c1a7190dd041aa1b7bc22e3cc0ed4a86eccfd133fc4eb: Status 404 returned error can't find the container with id ac7ada546e705cbb560c1a7190dd041aa1b7bc22e3cc0ed4a86eccfd133fc4eb
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.953292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b2ade3a7417fc889eb651ff30d52c812803d3bfe2784166954c9ade5da707cfc"}
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.953552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ac7ada546e705cbb560c1a7190dd041aa1b7bc22e3cc0ed4a86eccfd133fc4eb"}
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.954707 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.955020 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.956317 4858 generic.go:334] "Generic (PLEG): container finished" podID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerID="0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d" exitCode=0
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.956369 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n4r2" event={"ID":"cb1143a5-8f39-460c-9d9c-121a877118b9","Type":"ContainerDied","Data":"0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d"}
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.957093 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.957413 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.957749 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.960211 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.962895 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.963567 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3" exitCode=0
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.963586 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7" exitCode=0
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.963594 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489" exitCode=0
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.963601 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695" exitCode=2
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.963691 4858 scope.go:117] "RemoveContainer" containerID="ef07c23b53c8e43bfe5caa8b4a969ea3730ebd04d070b59a5a32a7901edd3729"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.966072 4858 generic.go:334] "Generic (PLEG): container finished" podID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerID="0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384" exitCode=0
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.966339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhrc4" event={"ID":"67328f86-d148-42b9-b5e0-29d1aa422b03","Type":"ContainerDied","Data":"0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384"}
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.966959 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.967261 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.967528 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.967726 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.968733 4858 generic.go:334] "Generic (PLEG): container finished" podID="df3eb38e-7204-4116-9870-a256348a5034" containerID="659d43f359c7bd659182c364d67154b583e89c0d044e8247227818230e78d2e5" exitCode=0
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.968779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"df3eb38e-7204-4116-9870-a256348a5034","Type":"ContainerDied","Data":"659d43f359c7bd659182c364d67154b583e89c0d044e8247227818230e78d2e5"}
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.969120 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.969287 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.969439 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.969579 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.969718 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.970664 4858 generic.go:334] "Generic (PLEG): container finished" podID="461fbf64-d6a9-4371-a580-1d832c1a8a29" containerID="393d72a54eb84ffa79500da8dc30d860be1cc8fdf41693f3a877cd206e9ebc07" exitCode=0
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.970707 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.971118 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.971659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2hzq" event={"ID":"461fbf64-d6a9-4371-a580-1d832c1a8a29","Type":"ContainerDied","Data":"393d72a54eb84ffa79500da8dc30d860be1cc8fdf41693f3a877cd206e9ebc07"}
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.971678 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2hzq" event={"ID":"461fbf64-d6a9-4371-a580-1d832c1a8a29","Type":"ContainerStarted","Data":"c1f66e6a8fe1151a8436ea2524170e5b74edaf62885cf55c0bc6a3a119ad923d"}
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.971959 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.972175 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.972444 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.972616 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.972899 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:54 crc kubenswrapper[4858]: I1205 14:00:54.973242 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:55 crc kubenswrapper[4858]: I1205 14:00:55.388197 4858 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Dec 05 14:00:55 crc kubenswrapper[4858]: I1205 14:00:55.388516 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused"
Dec 05 14:00:55 crc kubenswrapper[4858]: E1205 14:00:55.566465 4858 log.go:32] "RunPodSandbox from runtime service failed" err=<
Dec 05 14:00:55 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082" Netns:"/var/run/netns/6257005b-1394-4296-a1bc-11fea2e740a5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 14:00:55 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:00:55 crc kubenswrapper[4858]: >
Dec 05 14:00:55 crc kubenswrapper[4858]: E1205 14:00:55.566536 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Dec 05 14:00:55 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082" Netns:"/var/run/netns/6257005b-1394-4296-a1bc-11fea2e740a5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 14:00:55 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:00:55 crc kubenswrapper[4858]: > pod="openshift-marketplace/redhat-marketplace-9fbw6"
Dec 05 14:00:55 crc kubenswrapper[4858]: E1205 14:00:55.566580 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Dec 05 14:00:55 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082" Netns:"/var/run/netns/6257005b-1394-4296-a1bc-11fea2e740a5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 05 14:00:55 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:00:55 crc kubenswrapper[4858]: > pod="openshift-marketplace/redhat-marketplace-9fbw6"
Dec 05 14:00:55 crc kubenswrapper[4858]: E1205 14:00:55.566637 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-9fbw6_openshift-marketplace(9bdceab9-085a-485f-87c3-54a30f6a4b01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-9fbw6_openshift-marketplace(9bdceab9-085a-485f-87c3-54a30f6a4b01)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082\\\" Netns:\\\"/var/run/netns/6257005b-1394-4296-a1bc-11fea2e740a5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=46cb7768005f8cb2cdb29eccbc3f07310a09cfb6d1cce7c18091aee818e9a082;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s\\\": dial tcp 38.102.83.174:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-9fbw6" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01"
Dec 05 14:00:55 crc kubenswrapper[4858]: I1205 14:00:55.978666 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 05 14:00:56 crc kubenswrapper[4858]: E1205 14:00:56.183311 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-k2hzq.187e5689e05d74e3 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-k2hzq,UID:461fbf64-d6a9-4371-a580-1d832c1a8a29,APIVersion:v1,ResourceVersion:29567,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 14:00:54.150542563 +0000 UTC m=+262.698140702,LastTimestamp:2025-12-05 14:00:54.150542563 +0000 UTC m=+262.698140702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.321856 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.322581 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.322883 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.323313 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.323722 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.324047 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.431079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-kubelet-dir\") pod \"df3eb38e-7204-4116-9870-a256348a5034\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") "
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.431601 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df3eb38e-7204-4116-9870-a256348a5034-kube-api-access\") pod \"df3eb38e-7204-4116-9870-a256348a5034\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") "
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.431632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-var-lock\") pod \"df3eb38e-7204-4116-9870-a256348a5034\" (UID: \"df3eb38e-7204-4116-9870-a256348a5034\") "
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.432171 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-var-lock" (OuterVolumeSpecName: "var-lock") pod "df3eb38e-7204-4116-9870-a256348a5034" (UID: "df3eb38e-7204-4116-9870-a256348a5034"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.432209 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "df3eb38e-7204-4116-9870-a256348a5034" (UID: "df3eb38e-7204-4116-9870-a256348a5034"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.440403 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3eb38e-7204-4116-9870-a256348a5034-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "df3eb38e-7204-4116-9870-a256348a5034" (UID: "df3eb38e-7204-4116-9870-a256348a5034"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.533030 4858 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.533057 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df3eb38e-7204-4116-9870-a256348a5034-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.533068 4858 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/df3eb38e-7204-4116-9870-a256348a5034-var-lock\") on node \"crc\" DevicePath \"\""
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.985979 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"df3eb38e-7204-4116-9870-a256348a5034","Type":"ContainerDied","Data":"073b167f42065ed8c55c799eee7dec9b49a8ab117c00afb28c5e22ca1afb5f27"}
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.986015 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073b167f42065ed8c55c799eee7dec9b49a8ab117c00afb28c5e22ca1afb5f27"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.986064 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.990905 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.991582 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979" exitCode=0
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.998107 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.998406 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.998567 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.998720 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:56 crc kubenswrapper[4858]: I1205 14:00:56.998918 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.099455 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.101295 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.101949 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.102769 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.103009 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.103189 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.103355 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.103522 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.139921 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.139957 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.140042 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.140026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.140061 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.140082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.140293 4858 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.140305 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.140314 4858 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Dec 05 14:00:57 crc kubenswrapper[4858]: I1205 14:00:57.910061 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.002709 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhrc4" event={"ID":"67328f86-d148-42b9-b5e0-29d1aa422b03","Type":"ContainerStarted","Data":"db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e"}
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.003413 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.003680 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.004148 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.004474 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.004623 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.014528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2hzq" event={"ID":"461fbf64-d6a9-4371-a580-1d832c1a8a29","Type":"ContainerStarted","Data":"d9c7383ac02a3741c18778a0e20e0fea05ba5916b8b8a5c88538030168092fc2"}
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.015282 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.015701 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.016106 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.016309 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.016496 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.016974 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n4r2"
event={"ID":"cb1143a5-8f39-460c-9d9c-121a877118b9","Type":"ContainerStarted","Data":"9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20"} Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.017404 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.017592 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.017786 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.017985 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.018194 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.019197 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.019699 4858 scope.go:117] "RemoveContainer" containerID="4932d3fd71c27998dc858d517cea5914ee9b3f4af706103ed8c213de79ea34c3" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.019857 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.023946 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.024113 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.024427 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.024699 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.025080 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.025340 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.881496 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.882127 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.882517 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.882850 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.883110 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.883513 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.886073 4858 scope.go:117] "RemoveContainer" containerID="77171cd959bc643e2d899632190c94ba739dec4a4a2a507b8e81e200dfd6d3a7" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.903062 4858 scope.go:117] "RemoveContainer" containerID="7c3b633554b30eb61d671edfd116f21c497d79238179d243131e32a636c18489" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.919558 4858 scope.go:117] "RemoveContainer" containerID="ab79659eb49610fb12e0a0a89daafb00ad056da40b91817c916d7113740b8695" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.930874 4858 scope.go:117] "RemoveContainer" containerID="a7ab6c653981d1a3e46dde0a6ab819b3ca2a57732958e1b1d21674c54dd4c979" Dec 05 14:00:58 crc kubenswrapper[4858]: I1205 14:00:58.946593 4858 scope.go:117] "RemoveContainer" containerID="15b563882da13c9d5940b587637e5897b043989f4e986427fbf54ad23d82d467" Dec 05 14:00:59 crc kubenswrapper[4858]: I1205 14:00:59.029194 4858 generic.go:334] "Generic (PLEG): container finished" podID="461fbf64-d6a9-4371-a580-1d832c1a8a29" containerID="d9c7383ac02a3741c18778a0e20e0fea05ba5916b8b8a5c88538030168092fc2" exitCode=0 Dec 05 14:00:59 crc kubenswrapper[4858]: I1205 14:00:59.030193 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2hzq" event={"ID":"461fbf64-d6a9-4371-a580-1d832c1a8a29","Type":"ContainerDied","Data":"d9c7383ac02a3741c18778a0e20e0fea05ba5916b8b8a5c88538030168092fc2"} Dec 05 14:00:59 crc kubenswrapper[4858]: I1205 14:00:59.031189 4858 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:59 crc kubenswrapper[4858]: I1205 14:00:59.031413 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 
38.102.83.174:6443: connect: connection refused" Dec 05 14:00:59 crc kubenswrapper[4858]: I1205 14:00:59.031565 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:59 crc kubenswrapper[4858]: I1205 14:00:59.031705 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:59 crc kubenswrapper[4858]: I1205 14:00:59.031939 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:00:59 crc kubenswrapper[4858]: I1205 14:00:59.032098 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.048004 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2hzq" event={"ID":"461fbf64-d6a9-4371-a580-1d832c1a8a29","Type":"ContainerStarted","Data":"e45ffd679253416485665bc4f8e2bc1d40aedf41bfcc0260d84a04c37c41e46f"} Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.048920 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.049370 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.049595 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.049881 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" 
Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.050219 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.275113 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.275425 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.315138 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.315737 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.316156 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.316423 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.316672 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.316964 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.467308 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.467677 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.502387 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:01:01 crc 
kubenswrapper[4858]: I1205 14:01:01.502865 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.503206 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.503660 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.503919 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.504171 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.901373 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.901988 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.906784 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.907052 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection 
refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.907439 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: E1205 14:01:01.990589 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: E1205 14:01:01.990857 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: E1205 14:01:01.991073 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: E1205 14:01:01.991395 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: E1205 14:01:01.991761 4858 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:01 crc kubenswrapper[4858]: I1205 14:01:01.991789 4858 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 05 14:01:01 crc kubenswrapper[4858]: E1205 14:01:01.992014 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="200ms" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.087941 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4n4r2" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.088469 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.088755 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.089096 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" 
pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.089513 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.089774 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.097011 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mhrc4" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.097534 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.097778 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.098112 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.098470 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: I1205 14:01:02.098697 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:02 crc kubenswrapper[4858]: E1205 14:01:02.192625 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" 
interval="400ms" Dec 05 14:01:02 crc kubenswrapper[4858]: E1205 14:01:02.594465 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="800ms" Dec 05 14:01:03 crc kubenswrapper[4858]: E1205 14:01:03.395196 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="1.6s" Dec 05 14:01:03 crc kubenswrapper[4858]: I1205 14:01:03.676494 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:01:03 crc kubenswrapper[4858]: I1205 14:01:03.676543 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:01:04 crc kubenswrapper[4858]: I1205 14:01:04.719168 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2hzq" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" containerName="registry-server" probeResult="failure" output=< Dec 05 14:01:04 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:01:04 crc kubenswrapper[4858]: > Dec 05 14:01:04 crc kubenswrapper[4858]: E1205 14:01:04.995955 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="3.2s" Dec 05 14:01:06 crc kubenswrapper[4858]: E1205 14:01:06.184153 4858 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-k2hzq.187e5689e05d74e3 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-k2hzq,UID:461fbf64-d6a9-4371-a580-1d832c1a8a29,APIVersion:v1,ResourceVersion:29567,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 14:00:54.150542563 +0000 UTC m=+262.698140702,LastTimestamp:2025-12-05 14:00:54.150542563 +0000 UTC m=+262.698140702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 14:01:07 crc kubenswrapper[4858]: I1205 14:01:07.898697 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:07 crc kubenswrapper[4858]: I1205 14:01:07.899385 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:08 crc kubenswrapper[4858]: E1205 14:01:08.196931 4858 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="6.4s" Dec 05 14:01:08 crc kubenswrapper[4858]: E1205 14:01:08.427579 4858 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 05 14:01:08 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9" Netns:"/var/run/netns/8cf03c4c-5f97-4de9-a9ff-9fad0a4dcbeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 14:01:08 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:01:08 crc kubenswrapper[4858]: > Dec 05 14:01:08 crc kubenswrapper[4858]: E1205 14:01:08.427649 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 05 14:01:08 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9" Netns:"/var/run/netns/8cf03c4c-5f97-4de9-a9ff-9fad0a4dcbeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" 
ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 14:01:08 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:01:08 crc kubenswrapper[4858]: > pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:08 crc kubenswrapper[4858]: E1205 14:01:08.427671 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 05 14:01:08 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9" Netns:"/var/run/netns/8cf03c4c-5f97-4de9-a9ff-9fad0a4dcbeb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s": dial tcp 38.102.83.174:6443: connect: connection refused Dec 05 14:01:08 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:01:08 crc kubenswrapper[4858]: > pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:08 crc kubenswrapper[4858]: E1205 14:01:08.427724 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"redhat-marketplace-9fbw6_openshift-marketplace(9bdceab9-085a-485f-87c3-54a30f6a4b01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-9fbw6_openshift-marketplace(9bdceab9-085a-485f-87c3-54a30f6a4b01)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-9fbw6_openshift-marketplace_9bdceab9-085a-485f-87c3-54a30f6a4b01_0(2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9): error adding pod openshift-marketplace_redhat-marketplace-9fbw6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9\\\" Netns:\\\"/var/run/netns/8cf03c4c-5f97-4de9-a9ff-9fad0a4dcbeb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-9fbw6;K8S_POD_INFRA_CONTAINER_ID=2460d9f32ec507a3eb21edbde66f4f3682684316334c92d53ebd28245a1507e9;K8S_POD_UID=9bdceab9-085a-485f-87c3-54a30f6a4b01\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-9fbw6] networking: Multus: [openshift-marketplace/redhat-marketplace-9fbw6/9bdceab9-085a-485f-87c3-54a30f6a4b01]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-9fbw6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fbw6?timeout=1m0s\\\": dial tcp 38.102.83.174:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-9fbw6" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.659650 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.660391 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.660838 4858 status_manager.go:851] "Failed to get status for pod" podUID="b5f0906b-baba-4d0c-9303-aaa807285c76" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4nzbm\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.661149 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" 
pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.661427 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.661714 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.662018 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: E1205 14:01:08.697853 4858 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" volumeName="registry-storage" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.899476 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.900494 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.900843 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.901271 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.901492 4858 status_manager.go:851] "Failed to get status for pod" podUID="b5f0906b-baba-4d0c-9303-aaa807285c76" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4nzbm\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.901747 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.902017 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.913780 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ee8667d-c367-46b9-8b51-335c4325c6ab" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.913990 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ee8667d-c367-46b9-8b51-335c4325c6ab" Dec 05 14:01:08 crc kubenswrapper[4858]: E1205 14:01:08.914357 4858 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:08 crc kubenswrapper[4858]: I1205 14:01:08.914868 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:08 crc kubenswrapper[4858]: W1205 14:01:08.934005 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5b408e344b9beb3304df1857e0eaa2c77d4ece07694200566c540357cba534b7 WatchSource:0}: Error finding container 5b408e344b9beb3304df1857e0eaa2c77d4ece07694200566c540357cba534b7: Status 404 returned error can't find the container with id 5b408e344b9beb3304df1857e0eaa2c77d4ece07694200566c540357cba534b7 Dec 05 14:01:09 crc kubenswrapper[4858]: I1205 14:01:09.252545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b408e344b9beb3304df1857e0eaa2c77d4ece07694200566c540357cba534b7"} Dec 05 14:01:10 crc kubenswrapper[4858]: I1205 14:01:10.688519 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" containerName="oauth-openshift" containerID="cri-o://ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1" gracePeriod=15 Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.154984 4858 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.155045 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.266461 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.266523 4858 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6" exitCode=1 Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.266558 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6"} Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.267090 4858 scope.go:117] "RemoveContainer" containerID="5de1bf22b06843e013c7d318512bda284b1ef81adf2ec9ec1c7fbb9d414e42c6" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.267389 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 
14:01:11.267788 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.268060 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.268504 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.269218 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.270733 4858 status_manager.go:851] "Failed to get status for pod" podUID="b5f0906b-baba-4d0c-9303-aaa807285c76" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4nzbm\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.271023 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.903382 4858 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.904875 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.905144 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: 
connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.905504 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.905781 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.906055 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.906349 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:11 crc kubenswrapper[4858]: I1205 14:01:11.906536 4858 status_manager.go:851] "Failed to get status for pod" podUID="b5f0906b-baba-4d0c-9303-aaa807285c76" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4nzbm\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.154319 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.154725 4858 status_manager.go:851] "Failed to get status for pod" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4zztz\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.154966 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.155187 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.155410 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.155556 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.155698 4858 status_manager.go:851] "Failed to get status for pod" podUID="b5f0906b-baba-4d0c-9303-aaa807285c76" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4nzbm\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.155849 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.156044 4858 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.156207 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.273348 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.274001 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9a2b508eba64c8afb5ca7e242d970c17302b3f3ef1ea4668a998a9d085a13934"} Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.275310 4858 status_manager.go:851] "Failed to get status for pod" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4zztz\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.276053 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.276475 4858 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="237668ad0ed10af5419dfd5f4a2676620f199d85e34cbe7055ea9ff33504de4f" exitCode=0 Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.276547 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"237668ad0ed10af5419dfd5f4a2676620f199d85e34cbe7055ea9ff33504de4f"} Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.276598 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.276866 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.276882 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ee8667d-c367-46b9-8b51-335c4325c6ab" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.276914 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ee8667d-c367-46b9-8b51-335c4325c6ab" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.277094 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.277277 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: E1205 14:01:12.277328 4858 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.277544 4858 status_manager.go:851] "Failed to get status for pod" podUID="b5f0906b-baba-4d0c-9303-aaa807285c76" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4nzbm\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.277871 4858 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.277993 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.277934 4858 generic.go:334] "Generic (PLEG): container finished" podID="065bd27a-40da-4591-82c4-2c1e8717b9d6" containerID="ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1" exitCode=0 Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.277945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" event={"ID":"065bd27a-40da-4591-82c4-2c1e8717b9d6","Type":"ContainerDied","Data":"ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1"} Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.278186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" event={"ID":"065bd27a-40da-4591-82c4-2c1e8717b9d6","Type":"ContainerDied","Data":"3f4f489c878a690e0dd5072d4f0de0057c429b0a43585d96798e2a1f2a893bf2"} Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.278215 4858 scope.go:117] "RemoveContainer" containerID="ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.278664 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.279038 4858 status_manager.go:851] "Failed to get status for pod" podUID="b5f0906b-baba-4d0c-9303-aaa807285c76" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4nzbm\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.279320 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.279619 4858 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.279947 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.280155 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: 
connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.280375 4858 status_manager.go:851] "Failed to get status for pod" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4zztz\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.280588 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.280813 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.281100 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.298348 4858 scope.go:117] "RemoveContainer" containerID="ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1" Dec 05 14:01:12 crc kubenswrapper[4858]: E1205 14:01:12.298754 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1\": container with ID starting with ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1 not found: ID does not exist" containerID="ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.298796 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1"} err="failed to get container status \"ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1\": rpc error: code = NotFound desc = could not find container \"ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1\": container with ID starting with ab5eb6a1ac27b2d6dea9a6eb87e24a41a54b59c1f14231f2c3be8059d1a4bef1 not found: ID does not exist" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334400 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-error\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334625 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-ocp-branding-template\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-dir\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334695 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-idp-0-file-data\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334717 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-session\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334748 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-provider-selection\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334785 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-router-certs\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334808 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-cliconfig\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334848 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb6dd\" (UniqueName: \"kubernetes.io/projected/065bd27a-40da-4591-82c4-2c1e8717b9d6-kube-api-access-mb6dd\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334876 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-policies\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334928 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-service-ca\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: 
\"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334965 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-trusted-ca-bundle\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.334989 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-serving-cert\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.335022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-login\") pod \"065bd27a-40da-4591-82c4-2c1e8717b9d6\" (UID: \"065bd27a-40da-4591-82c4-2c1e8717b9d6\") " Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.338380 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.339211 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.339772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.340769 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.341278 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.341685 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.342146 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.350161 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.350717 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065bd27a-40da-4591-82c4-2c1e8717b9d6-kube-api-access-mb6dd" (OuterVolumeSpecName: "kube-api-access-mb6dd") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "kube-api-access-mb6dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.352221 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.362805 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.363227 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.370791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.370942 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "065bd27a-40da-4591-82c4-2c1e8717b9d6" (UID: "065bd27a-40da-4591-82c4-2c1e8717b9d6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.435944 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.435975 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.435985 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb6dd\" (UniqueName: \"kubernetes.io/projected/065bd27a-40da-4591-82c4-2c1e8717b9d6-kube-api-access-mb6dd\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.435995 4858 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436003 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436011 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436020 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436029 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436037 4858 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436047 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436055 4858 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/065bd27a-40da-4591-82c4-2c1e8717b9d6-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436065 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436073 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.436082 4858 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/065bd27a-40da-4591-82c4-2c1e8717b9d6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.590886 4858 status_manager.go:851] "Failed to get status for pod" podUID="b5f0906b-baba-4d0c-9303-aaa807285c76" pod="openshift-image-registry/image-registry-66df7c8f76-4nzbm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-4nzbm\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.591172 4858 status_manager.go:851] "Failed to get status for pod" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" pod="openshift-marketplace/redhat-operators-k2hzq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k2hzq\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.591436 4858 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.591653 4858 status_manager.go:851] "Failed to get status for pod" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" pod="openshift-marketplace/certified-operators-4n4r2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4n4r2\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.591974 4858 status_manager.go:851] "Failed to get status for pod" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" pod="openshift-authentication/oauth-openshift-558db77b4-4zztz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4zztz\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.592230 4858 status_manager.go:851] "Failed to get status for pod" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" pod="openshift-marketplace/community-operators-mhrc4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mhrc4\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.592419 4858 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.592616 4858 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:12 crc kubenswrapper[4858]: I1205 14:01:12.592957 4858 status_manager.go:851] "Failed to get status for pod" podUID="df3eb38e-7204-4116-9870-a256348a5034" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 05 14:01:13 crc kubenswrapper[4858]: I1205 14:01:13.295363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2b47125a877a96b3527e2557c696c0f7ecc515f10fcce88e107671de1ea215f"} Dec 05 14:01:13 crc kubenswrapper[4858]: I1205 14:01:13.295666 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"78d66ba6e907881e555b9d0b16a9503c6dafa9098c6ed84d20c58085556c4eff"} Dec 05 14:01:13 crc kubenswrapper[4858]: I1205 14:01:13.759986 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:01:13 crc kubenswrapper[4858]: I1205 14:01:13.836032 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2hzq" Dec 05 14:01:14 crc kubenswrapper[4858]: I1205 14:01:14.308108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aac8419538806c8d586c363fbe1c42468c2316bc823b3554b0a20c65534abae5"} Dec 05 14:01:14 crc kubenswrapper[4858]: I1205 14:01:14.308404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"92a19f3de996a6fad92d350f9beba944441be1ae8781fe4c2caeff2cd2ec1bfc"} Dec 05 14:01:14 crc kubenswrapper[4858]: I1205 14:01:14.308420 4858 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3226e981d08f9acaea6592c516862315d086e31bc99a03a9453d443052df6f46"} Dec 05 14:01:14 crc kubenswrapper[4858]: I1205 14:01:14.308445 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ee8667d-c367-46b9-8b51-335c4325c6ab" Dec 05 14:01:14 crc kubenswrapper[4858]: I1205 14:01:14.308464 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ee8667d-c367-46b9-8b51-335c4325c6ab" Dec 05 14:01:17 crc kubenswrapper[4858]: I1205 14:01:17.488642 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 14:01:17 crc kubenswrapper[4858]: I1205 14:01:17.498291 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 14:01:18 crc kubenswrapper[4858]: I1205 14:01:18.340549 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 14:01:18 crc kubenswrapper[4858]: I1205 14:01:18.915546 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:18 crc kubenswrapper[4858]: I1205 14:01:18.915619 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:18 crc kubenswrapper[4858]: I1205 14:01:18.915631 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:18 crc kubenswrapper[4858]: I1205 14:01:18.920860 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:19 crc kubenswrapper[4858]: I1205 14:01:19.768833 4858 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 14:01:19 crc kubenswrapper[4858]: I1205 14:01:19.831454 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="93647780-ede8-4b1f-8be8-26f20389858b" Dec 05 14:01:20 crc kubenswrapper[4858]: I1205 14:01:20.351010 4858 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ee8667d-c367-46b9-8b51-335c4325c6ab" Dec 05 14:01:20 crc kubenswrapper[4858]: I1205 14:01:20.351041 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4ee8667d-c367-46b9-8b51-335c4325c6ab" Dec 05 14:01:20 crc kubenswrapper[4858]: I1205 14:01:20.354671 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="93647780-ede8-4b1f-8be8-26f20389858b" Dec 05 14:01:20 crc kubenswrapper[4858]: I1205 14:01:20.898841 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:20 crc kubenswrapper[4858]: I1205 14:01:20.899308 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:21 crc kubenswrapper[4858]: I1205 14:01:21.160942 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 14:01:22 crc kubenswrapper[4858]: I1205 14:01:22.361170 4858 generic.go:334] "Generic (PLEG): container finished" podID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerID="7d7aae8fbc2a9de891e3870491a51a452261f8c865568b15f03d0e60774d0206" exitCode=0 Dec 05 14:01:22 crc kubenswrapper[4858]: I1205 14:01:22.361234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fbw6" event={"ID":"9bdceab9-085a-485f-87c3-54a30f6a4b01","Type":"ContainerDied","Data":"7d7aae8fbc2a9de891e3870491a51a452261f8c865568b15f03d0e60774d0206"} Dec 05 14:01:22 crc kubenswrapper[4858]: I1205 14:01:22.362266 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fbw6" event={"ID":"9bdceab9-085a-485f-87c3-54a30f6a4b01","Type":"ContainerStarted","Data":"2604d4c6fa53056e60353186a148349ccd51acb992f73241128be6260cd175f2"} Dec 05 14:01:23 crc kubenswrapper[4858]: I1205 14:01:23.368853 4858 generic.go:334] "Generic (PLEG): container finished" podID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerID="5dba2e12b8ac13b7d672024ea501cbe184933891e15526e62424d7dae1e57d03" exitCode=0 Dec 05 14:01:23 crc kubenswrapper[4858]: I1205 14:01:23.368938 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fbw6" event={"ID":"9bdceab9-085a-485f-87c3-54a30f6a4b01","Type":"ContainerDied","Data":"5dba2e12b8ac13b7d672024ea501cbe184933891e15526e62424d7dae1e57d03"} Dec 05 14:01:24 crc kubenswrapper[4858]: I1205 14:01:24.375680 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fbw6" event={"ID":"9bdceab9-085a-485f-87c3-54a30f6a4b01","Type":"ContainerStarted","Data":"6b99a21c2482afc4af0fd96ee3497b0d85234becac72fe662c6b4438a4519361"} Dec 05 14:01:28 crc kubenswrapper[4858]: I1205 14:01:28.877122 4858 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podf4b27818a5e8e43d0dc095d08835c792"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podf4b27818a5e8e43d0dc095d08835c792] : Timed out while waiting for systemd to remove kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice" Dec 05 14:01:29 crc kubenswrapper[4858]: I1205 14:01:29.584724 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 14:01:30 crc kubenswrapper[4858]: I1205 14:01:30.213788 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 14:01:30 crc kubenswrapper[4858]: I1205 14:01:30.314076 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 05 14:01:30 crc kubenswrapper[4858]: I1205 14:01:30.603435 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 14:01:31 crc kubenswrapper[4858]: I1205 14:01:31.190993 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 14:01:32 crc kubenswrapper[4858]: I1205 14:01:32.191896 4858 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 14:01:32 crc kubenswrapper[4858]: I1205 14:01:32.227978 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 05 14:01:32 crc kubenswrapper[4858]: I1205 14:01:32.256443 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 14:01:32 crc kubenswrapper[4858]: I1205 14:01:32.770151 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 14:01:32 crc kubenswrapper[4858]: I1205 14:01:32.792769 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 14:01:32 crc kubenswrapper[4858]: I1205 14:01:32.882342 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 14:01:32 crc kubenswrapper[4858]: I1205 14:01:32.954195 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.317756 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.350764 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.374942 4858 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.559705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.859791 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.886056 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.886358 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.894962 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.897032 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 14:01:33 crc kubenswrapper[4858]: I1205 14:01:33.932580 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.068781 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.079700 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.235813 
4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.315962 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.468236 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.576667 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.636971 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.642605 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.855799 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.896035 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.960906 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 14:01:34 crc kubenswrapper[4858]: I1205 14:01:34.974464 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.002427 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.043696 4858 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.188420 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.212285 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.371109 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.412260 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.450646 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.504696 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.595930 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.653017 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" 
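
The burst of "reflector.go:368] Caches populated for *v1.ConfigMap/*v1.Secret ..." entries above and below this point shows the kubelet's client-go reflectors re-listing API objects once the apiserver at api-int.crc.testing:6443 became reachable again after the restart recorded earlier. A minimal sketch of that same mechanism using the public client-go shared-informer API follows; the kubeconfig path and printed messages are assumptions for illustration only (the kubelet wires its reflectors up internally rather than through a kubeconfig):

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // A shared informer factory drives the same Reflector machinery
        // that emits the "Caches populated for *v1.ConfigMap ..." lines.
        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        cmInformer := factory.Core().V1().ConfigMaps().Informer()

        cmInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                cm := obj.(*v1.ConfigMap)
                fmt.Printf("cache add: %s/%s\n", cm.Namespace, cm.Name)
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Blocks until the initial LIST+WATCH has filled the local store;
        // a reflector reports its caches as populated only once this
        // initial sync has completed.
        if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
            panic("cache never synced")
        }
        fmt.Println("ConfigMap cache synced")
    }

The WaitForCacheSync step is what the dense run of "Caches populated" markers corresponds to here: after losing the apiserver, every watched type (Secrets, ConfigMaps, Services, Pods, RuntimeClasses, CSIDrivers) must complete a fresh LIST and re-establish its WATCH before the kubelet's local view is trustworthy again, which is why these entries arrive in a burst between 14:01:29 and 14:01:38.
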
Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.731454 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.742625 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.790718 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.800993 4858 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 05 14:01:35 crc kubenswrapper[4858]: I1205 14:01:35.822923 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.040217 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.200917 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.361021 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.428049 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.557116 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.658018 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.691853 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.722021 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.828265 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.928352 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 05 14:01:36 crc kubenswrapper[4858]: I1205 14:01:36.974868 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 05 14:01:37 crc kubenswrapper[4858]: I1205 14:01:37.088684 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 05 14:01:37 crc kubenswrapper[4858]: I1205 14:01:37.180643 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 05 14:01:37 crc kubenswrapper[4858]: I1205 14:01:37.394092 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 05 14:01:37 crc kubenswrapper[4858]: I1205 14:01:37.489446 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 05 14:01:37 crc kubenswrapper[4858]: I1205 14:01:37.772306 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.118922 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.209027 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.240222 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.339785 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.435790 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.454528 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.523705 4858 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.524414 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhrc4" podStartSLOduration=44.212580775 podStartE2EDuration="48.524380245s" podCreationTimestamp="2025-12-05 14:00:50 +0000 UTC" firstStartedPulling="2025-12-05 14:00:52.931275139 +0000 UTC m=+261.478873278" lastFinishedPulling="2025-12-05 14:00:57.243074609 +0000 UTC m=+265.790672748" observedRunningTime="2025-12-05 14:01:19.923363242 +0000 UTC m=+288.470961411" watchObservedRunningTime="2025-12-05 14:01:38.524380245 +0000 UTC m=+307.071978384"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.524877 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4n4r2" podStartSLOduration=43.699165426 podStartE2EDuration="47.524870519s" podCreationTimestamp="2025-12-05 14:00:51 +0000 UTC" firstStartedPulling="2025-12-05 14:00:52.935973791 +0000 UTC m=+261.483571930" lastFinishedPulling="2025-12-05 14:00:56.761678884 +0000 UTC m=+265.309277023" observedRunningTime="2025-12-05 14:01:19.850698754 +0000 UTC m=+288.398296893" watchObservedRunningTime="2025-12-05 14:01:38.524870519 +0000 UTC m=+307.072468658"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.526048 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.526036972 podStartE2EDuration="44.526036972s" podCreationTimestamp="2025-12-05 14:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:01:19.932920201 +0000 UTC m=+288.480518340" watchObservedRunningTime="2025-12-05 14:01:38.526036972 +0000 UTC m=+307.073635111"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.526806 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9fbw6" podStartSLOduration=44.119704669 podStartE2EDuration="45.526798313s" podCreationTimestamp="2025-12-05 14:00:53 +0000 UTC" firstStartedPulling="2025-12-05 14:01:22.363458986 +0000 UTC m=+290.911057135" lastFinishedPulling="2025-12-05 14:01:23.77055264 +0000 UTC m=+292.318150779" observedRunningTime="2025-12-05 14:01:24.389593776 +0000 UTC m=+292.937191915" watchObservedRunningTime="2025-12-05 14:01:38.526798313 +0000 UTC m=+307.074396452"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.529864 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2hzq" podStartSLOduration=40.591949684 podStartE2EDuration="45.529846148s" podCreationTimestamp="2025-12-05 14:00:53 +0000 UTC" firstStartedPulling="2025-12-05 14:00:54.972474851 +0000 UTC m=+263.520072990" lastFinishedPulling="2025-12-05 14:00:59.910371315 +0000 UTC m=+268.457969454" observedRunningTime="2025-12-05 14:01:19.828490761 +0000 UTC m=+288.376088900" watchObservedRunningTime="2025-12-05 14:01:38.529846148 +0000 UTC m=+307.077444307"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.530971 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-4zztz"]
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.531082 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.531121 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fbw6"]
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.531157 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-trcq9"]
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.534612 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.573289 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.573258176 podStartE2EDuration="19.573258176s" podCreationTimestamp="2025-12-05 14:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:01:38.569955334 +0000 UTC m=+307.117553473" watchObservedRunningTime="2025-12-05 14:01:38.573258176 +0000 UTC m=+307.120856325"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.603438 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.623787 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.627315 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.684500 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.757003 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.920350 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:01:38 crc kubenswrapper[4858]: I1205 14:01:38.920775 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.016930 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.074904 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.127379 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.128292 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.133100 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.159862 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.220802 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.431602 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.562916 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.673052 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.677590 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.827105 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.909295 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" path="/var/lib/kubelet/pods/065bd27a-40da-4591-82c4-2c1e8717b9d6/volumes"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.968518 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 05 14:01:39 crc kubenswrapper[4858]: I1205 14:01:39.981644 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.260619 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.448260 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.566528 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.601314 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.630583 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.740380 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-748578cd96-nlm54"]
Dec 05 14:01:40 crc kubenswrapper[4858]: E1205 14:01:40.740641 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df3eb38e-7204-4116-9870-a256348a5034" containerName="installer"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.740654 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="df3eb38e-7204-4116-9870-a256348a5034" containerName="installer"
Dec 05 14:01:40 crc kubenswrapper[4858]: E1205 14:01:40.740666 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" containerName="oauth-openshift"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.740682 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" containerName="oauth-openshift"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.740785 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="065bd27a-40da-4591-82c4-2c1e8717b9d6" containerName="oauth-openshift"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.740801 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="df3eb38e-7204-4116-9870-a256348a5034" containerName="installer"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.741296 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.748701 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.748945 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.749783 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.750319 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.750644 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.750780 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.751003 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753100 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753131 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753143 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753266 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753300 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753304 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-audit-policies\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753341 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753368 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e81e683d-b55e-4076-b333-4e68d8caed3c-audit-dir\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753408 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-session\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753444 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-login\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7hh\" (UniqueName: \"kubernetes.io/projected/e81e683d-b55e-4076-b333-4e68d8caed3c-kube-api-access-kh7hh\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753526 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-error\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753607 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.753757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.754435 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.754741 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.754981 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.761083 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.768666 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.774906 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-748578cd96-nlm54"]
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.776255 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855118 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855182 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855214 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855237 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855364 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-audit-policies\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e81e683d-b55e-4076-b333-4e68d8caed3c-audit-dir\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-session\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855456 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-login\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh7hh\" (UniqueName: \"kubernetes.io/projected/e81e683d-b55e-4076-b333-4e68d8caed3c-kube-api-access-kh7hh\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.855502 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-error\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.856477 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e81e683d-b55e-4076-b333-4e68d8caed3c-audit-dir\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.857366 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.857436 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-audit-policies\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.862495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.863387 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.863815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.863955 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.864469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.864554 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-error\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.865092 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-user-template-login\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.865506 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.865971 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-session\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.877445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e81e683d-b55e-4076-b333-4e68d8caed3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.881081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh7hh\" (UniqueName: \"kubernetes.io/projected/e81e683d-b55e-4076-b333-4e68d8caed3c-kube-api-access-kh7hh\") pod \"oauth-openshift-748578cd96-nlm54\" (UID: \"e81e683d-b55e-4076-b333-4e68d8caed3c\") " pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.915029 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 05 14:01:40 crc kubenswrapper[4858]: I1205 14:01:40.950701 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.019340 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.069010 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.121899 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.221145 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.234051 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.272934 4858 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.273147 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b2ade3a7417fc889eb651ff30d52c812803d3bfe2784166954c9ade5da707cfc" gracePeriod=5
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.306586 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.359720 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.953383 4858 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 05 14:01:41 crc kubenswrapper[4858]: I1205 14:01:41.972723 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.016711 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.024587 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.027287 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.105587 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.161741 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.164383 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.218207 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.238218 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.296314 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.353978 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.396771 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.434710 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.441777 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.508115 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.512134 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.587360 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.681765 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.683662 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.733180 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.844885 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 05 14:01:42 crc kubenswrapper[4858]: I1205 14:01:42.871711 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.022439 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.069947 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.084872 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.163122 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.259392 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.268969 4858 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.369595 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.450047 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.480799 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.600912 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.611376 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.652240 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.704886 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.813063 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 05 14:01:43 crc kubenswrapper[4858]: E1205 14:01:43.948108 4858 log.go:32] "RunPodSandbox from runtime service failed" err=<
Dec 05 14:01:43 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e" Netns:"/var/run/netns/a07b5e82-0e64-4959-81cd-669e8c432609" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found
Dec 05 14:01:43 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:01:43 crc kubenswrapper[4858]: >
Dec 05 14:01:43 crc kubenswrapper[4858]: E1205 14:01:43.948194 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Dec 05 14:01:43 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e" Netns:"/var/run/netns/a07b5e82-0e64-4959-81cd-669e8c432609" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found
Dec 05 14:01:43 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:01:43 crc kubenswrapper[4858]: > pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:43 crc kubenswrapper[4858]: E1205 14:01:43.948225 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Dec 05 14:01:43 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e" Netns:"/var/run/netns/a07b5e82-0e64-4959-81cd-669e8c432609" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found
Dec 05 14:01:43 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:01:43 crc kubenswrapper[4858]: > pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:43 crc kubenswrapper[4858]: E1205 14:01:43.948291 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-748578cd96-nlm54_openshift-authentication(e81e683d-b55e-4076-b333-4e68d8caed3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-748578cd96-nlm54_openshift-authentication(e81e683d-b55e-4076-b333-4e68d8caed3c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e\\\" Netns:\\\"/var/run/netns/a07b5e82-0e64-4959-81cd-669e8c432609\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=a451d5f32e25447b5511f410bd694de3c3bed4d6a5ed64bc331b25635124133e;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod \\\"oauth-openshift-748578cd96-nlm54\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c"
Dec 05 14:01:43 crc kubenswrapper[4858]: I1205 14:01:43.965357 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.071786 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.077881 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.111325 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.192671 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.323372 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.483492 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.484252 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.672984 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 05 14:01:44 crc kubenswrapper[4858]: I1205 14:01:44.961364 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 05 14:01:45 crc kubenswrapper[4858]: I1205 14:01:45.035709 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 05 14:01:45 crc kubenswrapper[4858]: I1205 14:01:45.593343 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 05 14:01:46 crc kubenswrapper[4858]: I1205 14:01:46.219235 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 05 14:01:46 crc kubenswrapper[4858]: I1205 14:01:46.495459 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 05 14:01:46 crc kubenswrapper[4858]: I1205 14:01:46.495506 4858 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b2ade3a7417fc889eb651ff30d52c812803d3bfe2784166954c9ade5da707cfc" exitCode=137
Dec 05 14:01:46 crc kubenswrapper[4858]: I1205 14:01:46.847992 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 05 14:01:46 crc kubenswrapper[4858]: I1205 14:01:46.848140 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.040707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.040876 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.040892 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.040940 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.040967 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.041077 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.041099 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.041072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.041101 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.041536 4858 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.041553 4858 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.041564 4858 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.041575 4858 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.053276 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.143064 4858 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.502464 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.502811 4858 scope.go:117] "RemoveContainer" containerID="b2ade3a7417fc889eb651ff30d52c812803d3bfe2784166954c9ade5da707cfc" Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.502900 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 14:01:47 crc kubenswrapper[4858]: E1205 14:01:47.536005 4858 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 05 14:01:47 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb" Netns:"/var/run/netns/6cd1ab85-a3b0-44c1-b85d-928aec809ad4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found Dec 05 14:01:47 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:01:47 crc kubenswrapper[4858]: > Dec 05 14:01:47 crc kubenswrapper[4858]: E1205 14:01:47.536065 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 05 14:01:47 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb" Netns:"/var/run/netns/6cd1ab85-a3b0-44c1-b85d-928aec809ad4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found Dec 05 14:01:47 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:01:47 crc kubenswrapper[4858]: > pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:01:47 crc kubenswrapper[4858]: E1205 14:01:47.536084 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 05 14:01:47 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb" Netns:"/var/run/netns/6cd1ab85-a3b0-44c1-b85d-928aec809ad4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found Dec 05 14:01:47 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:01:47 crc kubenswrapper[4858]: > pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:01:47 crc kubenswrapper[4858]: E1205 14:01:47.536183 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"oauth-openshift-748578cd96-nlm54_openshift-authentication(e81e683d-b55e-4076-b333-4e68d8caed3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-748578cd96-nlm54_openshift-authentication(e81e683d-b55e-4076-b333-4e68d8caed3c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb\\\" Netns:\\\"/var/run/netns/6cd1ab85-a3b0-44c1-b85d-928aec809ad4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=d1ce68c1f68f37766012348cfc3df0ac301dd76f6806a5d73e26ab4099f0c6bb;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod \\\"oauth-openshift-748578cd96-nlm54\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.905179 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.905431 4858 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.915677 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.915718 4858 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4a589210-9707-43a8-b2cc-87e4924c3034" Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.918698 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 14:01:47 crc kubenswrapper[4858]: I1205 14:01:47.918722 4858 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4a589210-9707-43a8-b2cc-87e4924c3034" Dec 
05 14:01:54 crc kubenswrapper[4858]: I1205 14:01:54.966726 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 14:01:56 crc kubenswrapper[4858]: I1205 14:01:56.103298 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 14:01:57 crc kubenswrapper[4858]: I1205 14:01:57.270360 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 05 14:01:57 crc kubenswrapper[4858]: I1205 14:01:57.915688 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 05 14:01:58 crc kubenswrapper[4858]: I1205 14:01:58.036239 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 14:01:59 crc kubenswrapper[4858]: I1205 14:01:59.040915 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 05 14:01:59 crc kubenswrapper[4858]: I1205 14:01:59.381064 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 14:01:59 crc kubenswrapper[4858]: I1205 14:01:59.898638 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:01:59 crc kubenswrapper[4858]: I1205 14:01:59.899178 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:01:59 crc kubenswrapper[4858]: I1205 14:01:59.935720 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 14:02:00 crc kubenswrapper[4858]: I1205 14:02:00.410219 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 05 14:02:01 crc kubenswrapper[4858]: I1205 14:02:01.020132 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 05 14:02:01 crc kubenswrapper[4858]: I1205 14:02:01.027180 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 14:02:01 crc kubenswrapper[4858]: I1205 14:02:01.384736 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 14:02:01 crc kubenswrapper[4858]: I1205 14:02:01.386750 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 05 14:02:01 crc kubenswrapper[4858]: I1205 14:02:01.408070 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 14:02:02 crc kubenswrapper[4858]: I1205 14:02:02.067045 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 14:02:02 crc kubenswrapper[4858]: I1205 14:02:02.644940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 05 14:02:02 crc kubenswrapper[4858]: E1205 14:02:02.770642 4858 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 05 14:02:02 crc kubenswrapper[4858]: rpc error: code = 
Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c" Netns:"/var/run/netns/b41ea7d8-d0e1-43eb-aaf6-1d80d78002d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found Dec 05 14:02:02 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:02:02 crc kubenswrapper[4858]: > Dec 05 14:02:02 crc kubenswrapper[4858]: E1205 14:02:02.770707 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 05 14:02:02 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c" Netns:"/var/run/netns/b41ea7d8-d0e1-43eb-aaf6-1d80d78002d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found Dec 05 14:02:02 crc kubenswrapper[4858]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:02:02 crc kubenswrapper[4858]: > pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:02:02 crc kubenswrapper[4858]: E1205 14:02:02.770734 4858 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 05 14:02:02 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c" Netns:"/var/run/netns/b41ea7d8-d0e1-43eb-aaf6-1d80d78002d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod "oauth-openshift-748578cd96-nlm54" not found Dec 05 14:02:02 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 05 14:02:02 crc kubenswrapper[4858]: > pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:02:02 crc kubenswrapper[4858]: E1205 14:02:02.770795 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-748578cd96-nlm54_openshift-authentication(e81e683d-b55e-4076-b333-4e68d8caed3c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-748578cd96-nlm54_openshift-authentication(e81e683d-b55e-4076-b333-4e68d8caed3c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-748578cd96-nlm54_openshift-authentication_e81e683d-b55e-4076-b333-4e68d8caed3c_0(fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c): error adding pod openshift-authentication_oauth-openshift-748578cd96-nlm54 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c\\\" 
Netns:\\\"/var/run/netns/b41ea7d8-d0e1-43eb-aaf6-1d80d78002d5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-748578cd96-nlm54;K8S_POD_INFRA_CONTAINER_ID=fbb3b2181e52c3adf563b5c103524620fe86af87ece112e4c33eeb769793cd1c;K8S_POD_UID=e81e683d-b55e-4076-b333-4e68d8caed3c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-748578cd96-nlm54] networking: Multus: [openshift-authentication/oauth-openshift-748578cd96-nlm54/e81e683d-b55e-4076-b333-4e68d8caed3c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-748578cd96-nlm54 in out of cluster comm: pod \\\"oauth-openshift-748578cd96-nlm54\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" Dec 05 14:02:02 crc kubenswrapper[4858]: I1205 14:02:02.954004 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 14:02:03 crc kubenswrapper[4858]: I1205 14:02:03.137519 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 14:02:03 crc kubenswrapper[4858]: I1205 14:02:03.545743 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 14:02:03 crc kubenswrapper[4858]: I1205 14:02:03.575975 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" podUID="17d98864-f8cf-4f61-9707-30871521a9f2" containerName="registry" containerID="cri-o://370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba" gracePeriod=30 Dec 05 14:02:03 crc kubenswrapper[4858]: I1205 14:02:03.852998 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 14:02:03 crc kubenswrapper[4858]: I1205 14:02:03.876997 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 14:02:03 crc kubenswrapper[4858]: I1205 14:02:03.926787 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.414325 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.437451 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.477705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.491771 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-trusted-ca\") pod \"17d98864-f8cf-4f61-9707-30871521a9f2\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.491839 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4t4\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-kube-api-access-nb4t4\") pod \"17d98864-f8cf-4f61-9707-30871521a9f2\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.491867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17d98864-f8cf-4f61-9707-30871521a9f2-ca-trust-extracted\") pod \"17d98864-f8cf-4f61-9707-30871521a9f2\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.492085 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"17d98864-f8cf-4f61-9707-30871521a9f2\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.492116 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-bound-sa-token\") pod \"17d98864-f8cf-4f61-9707-30871521a9f2\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.492168 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-registry-tls\") pod \"17d98864-f8cf-4f61-9707-30871521a9f2\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.492197 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17d98864-f8cf-4f61-9707-30871521a9f2-installation-pull-secrets\") pod \"17d98864-f8cf-4f61-9707-30871521a9f2\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.492244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-registry-certificates\") pod \"17d98864-f8cf-4f61-9707-30871521a9f2\" (UID: \"17d98864-f8cf-4f61-9707-30871521a9f2\") " Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.493448 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "17d98864-f8cf-4f61-9707-30871521a9f2" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.493559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "17d98864-f8cf-4f61-9707-30871521a9f2" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.501142 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "17d98864-f8cf-4f61-9707-30871521a9f2" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.501668 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d98864-f8cf-4f61-9707-30871521a9f2-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "17d98864-f8cf-4f61-9707-30871521a9f2" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.501714 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-kube-api-access-nb4t4" (OuterVolumeSpecName: "kube-api-access-nb4t4") pod "17d98864-f8cf-4f61-9707-30871521a9f2" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2"). InnerVolumeSpecName "kube-api-access-nb4t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.502118 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "17d98864-f8cf-4f61-9707-30871521a9f2" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.507036 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "17d98864-f8cf-4f61-9707-30871521a9f2" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.510940 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17d98864-f8cf-4f61-9707-30871521a9f2-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "17d98864-f8cf-4f61-9707-30871521a9f2" (UID: "17d98864-f8cf-4f61-9707-30871521a9f2"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.593376 4858 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.593408 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17d98864-f8cf-4f61-9707-30871521a9f2-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.593420 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb4t4\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-kube-api-access-nb4t4\") on node \"crc\" DevicePath \"\"" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.593430 4858 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/17d98864-f8cf-4f61-9707-30871521a9f2-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.593441 4858 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.593450 4858 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/17d98864-f8cf-4f61-9707-30871521a9f2-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.593459 4858 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/17d98864-f8cf-4f61-9707-30871521a9f2-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.605498 4858 generic.go:334] "Generic (PLEG): container finished" podID="17d98864-f8cf-4f61-9707-30871521a9f2" containerID="370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba" exitCode=0 Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.605551 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.605533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" event={"ID":"17d98864-f8cf-4f61-9707-30871521a9f2","Type":"ContainerDied","Data":"370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba"} Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.605605 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-trcq9" event={"ID":"17d98864-f8cf-4f61-9707-30871521a9f2","Type":"ContainerDied","Data":"d4b64f1f9d37d93846495fc1d90e0b7576d44d87f0e2b855e10a628b9899a418"} Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.605628 4858 scope.go:117] "RemoveContainer" containerID="370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.621710 4858 scope.go:117] "RemoveContainer" containerID="370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba" Dec 05 14:02:04 crc kubenswrapper[4858]: E1205 14:02:04.622198 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba\": container with ID starting with 370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba not found: ID does not exist" containerID="370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.622225 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba"} err="failed to get container status \"370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba\": rpc error: code = NotFound desc = could not find container \"370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba\": container with ID starting with 370fcc90a62dde8e1f2eaa685d3a0cc5fdd5a617b11ec0dfa549c6366d0a6eba not found: ID does not exist" Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.631020 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-trcq9"] Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.636063 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-trcq9"] Dec 05 14:02:04 crc kubenswrapper[4858]: I1205 14:02:04.970466 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 14:02:05 crc kubenswrapper[4858]: I1205 14:02:05.071788 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 05 14:02:05 crc kubenswrapper[4858]: I1205 14:02:05.273526 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 14:02:05 crc kubenswrapper[4858]: I1205 14:02:05.277318 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 14:02:05 crc kubenswrapper[4858]: I1205 14:02:05.615928 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 14:02:05 crc kubenswrapper[4858]: I1205 14:02:05.909181 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="17d98864-f8cf-4f61-9707-30871521a9f2" path="/var/lib/kubelet/pods/17d98864-f8cf-4f61-9707-30871521a9f2/volumes" Dec 05 14:02:06 crc kubenswrapper[4858]: I1205 14:02:06.263031 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 14:02:06 crc kubenswrapper[4858]: I1205 14:02:06.308493 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 14:02:06 crc kubenswrapper[4858]: I1205 14:02:06.546035 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 14:02:06 crc kubenswrapper[4858]: I1205 14:02:06.561783 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 14:02:06 crc kubenswrapper[4858]: I1205 14:02:06.792371 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 14:02:06 crc kubenswrapper[4858]: I1205 14:02:06.946678 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 14:02:07 crc kubenswrapper[4858]: I1205 14:02:07.041105 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 05 14:02:07 crc kubenswrapper[4858]: I1205 14:02:07.109498 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 14:02:07 crc kubenswrapper[4858]: I1205 14:02:07.114261 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 05 14:02:07 crc kubenswrapper[4858]: I1205 14:02:07.706476 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 14:02:07 crc kubenswrapper[4858]: I1205 14:02:07.744247 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 14:02:08 crc kubenswrapper[4858]: I1205 14:02:08.173525 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 14:02:08 crc kubenswrapper[4858]: I1205 14:02:08.380563 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 05 14:02:08 crc kubenswrapper[4858]: I1205 14:02:08.600662 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 14:02:08 crc kubenswrapper[4858]: I1205 14:02:08.684687 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 14:02:09 crc kubenswrapper[4858]: I1205 14:02:09.126706 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 14:02:09 crc kubenswrapper[4858]: I1205 14:02:09.153980 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 05 14:02:09 crc kubenswrapper[4858]: I1205 14:02:09.384867 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 14:02:09 crc 
kubenswrapper[4858]: I1205 14:02:09.402424 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 14:02:09 crc kubenswrapper[4858]: I1205 14:02:09.483251 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 05 14:02:09 crc kubenswrapper[4858]: I1205 14:02:09.646812 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 14:02:09 crc kubenswrapper[4858]: I1205 14:02:09.714416 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 14:02:11 crc kubenswrapper[4858]: I1205 14:02:11.152334 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 14:02:11 crc kubenswrapper[4858]: I1205 14:02:11.406428 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 14:02:11 crc kubenswrapper[4858]: I1205 14:02:11.625508 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 05 14:02:11 crc kubenswrapper[4858]: I1205 14:02:11.821857 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 14:02:12 crc kubenswrapper[4858]: I1205 14:02:12.348233 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 05 14:02:12 crc kubenswrapper[4858]: I1205 14:02:12.980699 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 14:02:13 crc kubenswrapper[4858]: I1205 14:02:13.278741 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 14:02:13 crc kubenswrapper[4858]: I1205 14:02:13.419020 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 14:02:13 crc kubenswrapper[4858]: I1205 14:02:13.432105 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 14:02:13 crc kubenswrapper[4858]: I1205 14:02:13.481604 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 05 14:02:13 crc kubenswrapper[4858]: I1205 14:02:13.959724 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 05 14:02:14 crc kubenswrapper[4858]: I1205 14:02:14.237341 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 05 14:02:14 crc kubenswrapper[4858]: I1205 14:02:14.265204 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 14:02:14 crc kubenswrapper[4858]: I1205 14:02:14.683305 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 14:02:15 crc kubenswrapper[4858]: I1205 14:02:15.007717 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 14:02:15 crc 
kubenswrapper[4858]: I1205 14:02:15.404497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 14:02:15 crc kubenswrapper[4858]: I1205 14:02:15.611687 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 14:02:15 crc kubenswrapper[4858]: I1205 14:02:15.713686 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 14:02:16 crc kubenswrapper[4858]: I1205 14:02:16.476964 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 14:02:16 crc kubenswrapper[4858]: I1205 14:02:16.899005 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:02:16 crc kubenswrapper[4858]: I1205 14:02:16.900154 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.094512 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.182096 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-748578cd96-nlm54"] Dec 05 14:02:17 crc kubenswrapper[4858]: W1205 14:02:17.193420 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode81e683d_b55e_4076_b333_4e68d8caed3c.slice/crio-0979fb60dd6840bd60fb122d4d783bd3ed5930a098cac6827dbf90e7356d9e4d WatchSource:0}: Error finding container 0979fb60dd6840bd60fb122d4d783bd3ed5930a098cac6827dbf90e7356d9e4d: Status 404 returned error can't find the container with id 0979fb60dd6840bd60fb122d4d783bd3ed5930a098cac6827dbf90e7356d9e4d Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.260741 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.451171 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.691421 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" event={"ID":"e81e683d-b55e-4076-b333-4e68d8caed3c","Type":"ContainerStarted","Data":"628c28a71c96308f3626201d8d7aee781a0c8fa9fa268e3c311e5b9ebf668ae9"} Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.691466 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" event={"ID":"e81e683d-b55e-4076-b333-4e68d8caed3c","Type":"ContainerStarted","Data":"0979fb60dd6840bd60fb122d4d783bd3ed5930a098cac6827dbf90e7356d9e4d"} Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.692459 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.770403 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.868537 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 14:02:17 crc kubenswrapper[4858]: I1205 14:02:17.890118 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podStartSLOduration=92.890096475 podStartE2EDuration="1m32.890096475s" podCreationTimestamp="2025-12-05 14:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:02:17.714584561 +0000 UTC m=+346.262182700" watchObservedRunningTime="2025-12-05 14:02:17.890096475 +0000 UTC m=+346.437694624" Dec 05 14:02:19 crc kubenswrapper[4858]: I1205 14:02:19.004680 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 14:02:19 crc kubenswrapper[4858]: I1205 14:02:19.008885 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 05 14:02:19 crc kubenswrapper[4858]: I1205 14:02:19.095797 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 14:02:19 crc kubenswrapper[4858]: I1205 14:02:19.202871 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 14:02:19 crc kubenswrapper[4858]: I1205 14:02:19.704392 4858 generic.go:334] "Generic (PLEG): container finished" podID="ff2db84d-03a9-4c8e-9584-aeafa84ead17" containerID="d28d165b0b7bddf89957c7f840bda46f2752488e5c295169884323a7cf2274c1" exitCode=0 Dec 05 14:02:19 crc kubenswrapper[4858]: I1205 14:02:19.704469 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" event={"ID":"ff2db84d-03a9-4c8e-9584-aeafa84ead17","Type":"ContainerDied","Data":"d28d165b0b7bddf89957c7f840bda46f2752488e5c295169884323a7cf2274c1"} Dec 05 14:02:19 crc kubenswrapper[4858]: I1205 14:02:19.705125 4858 scope.go:117] "RemoveContainer" containerID="d28d165b0b7bddf89957c7f840bda46f2752488e5c295169884323a7cf2274c1" Dec 05 14:02:20 crc kubenswrapper[4858]: I1205 14:02:20.310101 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 14:02:20 crc kubenswrapper[4858]: I1205 14:02:20.473889 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 14:02:20 crc kubenswrapper[4858]: I1205 14:02:20.711229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" event={"ID":"ff2db84d-03a9-4c8e-9584-aeafa84ead17","Type":"ContainerStarted","Data":"a0217344fd6a282192955a01686684c25e8410ebba012e5fbda8e03de92b766b"} Dec 05 14:02:20 crc kubenswrapper[4858]: I1205 14:02:20.711398 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 14:02:20 crc kubenswrapper[4858]: I1205 14:02:20.711564 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:02:20 crc kubenswrapper[4858]: I1205 14:02:20.713448 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" Dec 05 14:02:22 crc kubenswrapper[4858]: I1205 14:02:22.086530 4858 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 14:02:22 crc kubenswrapper[4858]: I1205 14:02:22.122224 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 14:02:22 crc kubenswrapper[4858]: I1205 14:02:22.407738 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 14:02:23 crc kubenswrapper[4858]: I1205 14:02:23.286058 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 14:02:24 crc kubenswrapper[4858]: I1205 14:02:24.542736 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 05 14:02:27 crc kubenswrapper[4858]: I1205 14:02:27.238289 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 14:02:27 crc kubenswrapper[4858]: I1205 14:02:27.737537 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 14:02:44 crc kubenswrapper[4858]: I1205 14:02:44.760528 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:02:44 crc kubenswrapper[4858]: I1205 14:02:44.761520 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:03:10 crc kubenswrapper[4858]: I1205 14:03:10.880256 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfbnh"] Dec 05 14:03:10 crc kubenswrapper[4858]: I1205 14:03:10.880958 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" podUID="ee76bb43-a079-4631-aace-ba93a4e04e4a" containerName="controller-manager" containerID="cri-o://8becbb2396401ed0934e50dc005e80887958a9d2ea3aa1da13e5ae8d6958016d" gracePeriod=30 Dec 05 14:03:10 crc kubenswrapper[4858]: I1205 14:03:10.984073 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn"] Dec 05 14:03:10 crc kubenswrapper[4858]: I1205 14:03:10.984608 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" podUID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" containerName="route-controller-manager" containerID="cri-o://4773bc3f859946bfbac6df391c21116ff32b4b19a5f674f13371c0fd7523ba7e" gracePeriod=30 Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.061639 4858 generic.go:334] "Generic (PLEG): container finished" podID="ee76bb43-a079-4631-aace-ba93a4e04e4a" containerID="8becbb2396401ed0934e50dc005e80887958a9d2ea3aa1da13e5ae8d6958016d" exitCode=0 Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.061731 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" event={"ID":"ee76bb43-a079-4631-aace-ba93a4e04e4a","Type":"ContainerDied","Data":"8becbb2396401ed0934e50dc005e80887958a9d2ea3aa1da13e5ae8d6958016d"} Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.063771 4858 generic.go:334] "Generic (PLEG): container finished" podID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" containerID="4773bc3f859946bfbac6df391c21116ff32b4b19a5f674f13371c0fd7523ba7e" exitCode=0 Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.063799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" event={"ID":"20f59d96-5524-4b11-ac3b-b2634f94b6f7","Type":"ContainerDied","Data":"4773bc3f859946bfbac6df391c21116ff32b4b19a5f674f13371c0fd7523ba7e"} Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.335913 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.365707 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-697b597b79-pzzs5"] Dec 05 14:03:12 crc kubenswrapper[4858]: E1205 14:03:12.365998 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.366010 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 14:03:12 crc kubenswrapper[4858]: E1205 14:03:12.366024 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d98864-f8cf-4f61-9707-30871521a9f2" containerName="registry" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.366049 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d98864-f8cf-4f61-9707-30871521a9f2" containerName="registry" Dec 05 14:03:12 crc kubenswrapper[4858]: E1205 14:03:12.366057 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee76bb43-a079-4631-aace-ba93a4e04e4a" containerName="controller-manager" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.366063 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee76bb43-a079-4631-aace-ba93a4e04e4a" containerName="controller-manager" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.366154 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee76bb43-a079-4631-aace-ba93a4e04e4a" containerName="controller-manager" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.366168 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.366176 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d98864-f8cf-4f61-9707-30871521a9f2" containerName="registry" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.366563 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.377028 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-697b597b79-pzzs5"] Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.402323 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-client-ca\") pod \"ee76bb43-a079-4631-aace-ba93a4e04e4a\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.402449 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-proxy-ca-bundles\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.402497 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17ed58ca-3d70-466f-95c9-db0b00258e6f-serving-cert\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.402520 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-client-ca\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.402620 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtnq\" (UniqueName: \"kubernetes.io/projected/17ed58ca-3d70-466f-95c9-db0b00258e6f-kube-api-access-nvtnq\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.402707 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-config\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.403187 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee76bb43-a079-4631-aace-ba93a4e04e4a" (UID: "ee76bb43-a079-4631-aace-ba93a4e04e4a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.454481 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.503641 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-config\") pod \"ee76bb43-a079-4631-aace-ba93a4e04e4a\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.503726 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzpp2\" (UniqueName: \"kubernetes.io/projected/ee76bb43-a079-4631-aace-ba93a4e04e4a-kube-api-access-dzpp2\") pod \"ee76bb43-a079-4631-aace-ba93a4e04e4a\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.503756 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-proxy-ca-bundles\") pod \"ee76bb43-a079-4631-aace-ba93a4e04e4a\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.503786 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee76bb43-a079-4631-aace-ba93a4e04e4a-serving-cert\") pod \"ee76bb43-a079-4631-aace-ba93a4e04e4a\" (UID: \"ee76bb43-a079-4631-aace-ba93a4e04e4a\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.504127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17ed58ca-3d70-466f-95c9-db0b00258e6f-serving-cert\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.504159 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-client-ca\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.504185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvtnq\" (UniqueName: \"kubernetes.io/projected/17ed58ca-3d70-466f-95c9-db0b00258e6f-kube-api-access-nvtnq\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.504244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-config\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.504270 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-proxy-ca-bundles\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " 
pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.504307 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.505773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-config" (OuterVolumeSpecName: "config") pod "ee76bb43-a079-4631-aace-ba93a4e04e4a" (UID: "ee76bb43-a079-4631-aace-ba93a4e04e4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.506074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-proxy-ca-bundles\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.506349 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-client-ca\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.506517 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ee76bb43-a079-4631-aace-ba93a4e04e4a" (UID: "ee76bb43-a079-4631-aace-ba93a4e04e4a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.508396 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-config\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.510732 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17ed58ca-3d70-466f-95c9-db0b00258e6f-serving-cert\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.516543 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee76bb43-a079-4631-aace-ba93a4e04e4a-kube-api-access-dzpp2" (OuterVolumeSpecName: "kube-api-access-dzpp2") pod "ee76bb43-a079-4631-aace-ba93a4e04e4a" (UID: "ee76bb43-a079-4631-aace-ba93a4e04e4a"). InnerVolumeSpecName "kube-api-access-dzpp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.517488 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee76bb43-a079-4631-aace-ba93a4e04e4a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee76bb43-a079-4631-aace-ba93a4e04e4a" (UID: "ee76bb43-a079-4631-aace-ba93a4e04e4a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.522752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvtnq\" (UniqueName: \"kubernetes.io/projected/17ed58ca-3d70-466f-95c9-db0b00258e6f-kube-api-access-nvtnq\") pod \"controller-manager-697b597b79-pzzs5\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.606085 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-config\") pod \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.606161 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-client-ca\") pod \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.606242 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f59d96-5524-4b11-ac3b-b2634f94b6f7-serving-cert\") pod \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.606264 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdhv4\" (UniqueName: \"kubernetes.io/projected/20f59d96-5524-4b11-ac3b-b2634f94b6f7-kube-api-access-fdhv4\") pod \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\" (UID: \"20f59d96-5524-4b11-ac3b-b2634f94b6f7\") " Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.606665 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.606682 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzpp2\" (UniqueName: \"kubernetes.io/projected/ee76bb43-a079-4631-aace-ba93a4e04e4a-kube-api-access-dzpp2\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.606694 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee76bb43-a079-4631-aace-ba93a4e04e4a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.606704 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee76bb43-a079-4631-aace-ba93a4e04e4a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.607138 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-config" (OuterVolumeSpecName: "config") pod "20f59d96-5524-4b11-ac3b-b2634f94b6f7" (UID: "20f59d96-5524-4b11-ac3b-b2634f94b6f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.607126 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "20f59d96-5524-4b11-ac3b-b2634f94b6f7" (UID: "20f59d96-5524-4b11-ac3b-b2634f94b6f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.610632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f59d96-5524-4b11-ac3b-b2634f94b6f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20f59d96-5524-4b11-ac3b-b2634f94b6f7" (UID: "20f59d96-5524-4b11-ac3b-b2634f94b6f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.612150 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f59d96-5524-4b11-ac3b-b2634f94b6f7-kube-api-access-fdhv4" (OuterVolumeSpecName: "kube-api-access-fdhv4") pod "20f59d96-5524-4b11-ac3b-b2634f94b6f7" (UID: "20f59d96-5524-4b11-ac3b-b2634f94b6f7"). InnerVolumeSpecName "kube-api-access-fdhv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.704261 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.708014 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.708061 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20f59d96-5524-4b11-ac3b-b2634f94b6f7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.708072 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20f59d96-5524-4b11-ac3b-b2634f94b6f7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.708082 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdhv4\" (UniqueName: \"kubernetes.io/projected/20f59d96-5524-4b11-ac3b-b2634f94b6f7-kube-api-access-fdhv4\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:12 crc kubenswrapper[4858]: I1205 14:03:12.900443 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-697b597b79-pzzs5"] Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.071136 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" event={"ID":"17ed58ca-3d70-466f-95c9-db0b00258e6f","Type":"ContainerStarted","Data":"4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d"} Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.071196 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" event={"ID":"17ed58ca-3d70-466f-95c9-db0b00258e6f","Type":"ContainerStarted","Data":"278ba43eccc79a541c66b299ee1d0645687af729f456de6892a86aed339dc8c4"} Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.072956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.075476 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" event={"ID":"ee76bb43-a079-4631-aace-ba93a4e04e4a","Type":"ContainerDied","Data":"d8d183dafc2eddc607bbee74dee04fc054eae9e3a8eb88abd726e00cf3948b04"} Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.075533 4858 scope.go:117] "RemoveContainer" containerID="8becbb2396401ed0934e50dc005e80887958a9d2ea3aa1da13e5ae8d6958016d" Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.075713 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wfbnh" Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.078994 4858 patch_prober.go:28] interesting pod/controller-manager-697b597b79-pzzs5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.079047 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" podUID="17ed58ca-3d70-466f-95c9-db0b00258e6f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.081802 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" event={"ID":"20f59d96-5524-4b11-ac3b-b2634f94b6f7","Type":"ContainerDied","Data":"4625a5c9edbda55d4e196514fa721f238dc55bb648816c2b18164cf59969f374"} Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.081959 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn" Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.094490 4858 scope.go:117] "RemoveContainer" containerID="4773bc3f859946bfbac6df391c21116ff32b4b19a5f674f13371c0fd7523ba7e" Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.098442 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" podStartSLOduration=3.098423439 podStartE2EDuration="3.098423439s" podCreationTimestamp="2025-12-05 14:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:03:13.094795228 +0000 UTC m=+401.642393367" watchObservedRunningTime="2025-12-05 14:03:13.098423439 +0000 UTC m=+401.646021578" Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.122492 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfbnh"] Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.125532 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wfbnh"] Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.147657 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn"] Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.152854 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-r2zjn"] Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.908693 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" path="/var/lib/kubelet/pods/20f59d96-5524-4b11-ac3b-b2634f94b6f7/volumes" Dec 05 14:03:13 crc kubenswrapper[4858]: I1205 14:03:13.909622 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee76bb43-a079-4631-aace-ba93a4e04e4a" path="/var/lib/kubelet/pods/ee76bb43-a079-4631-aace-ba93a4e04e4a/volumes" Dec 05 14:03:14 crc kubenswrapper[4858]: I1205 14:03:14.093888 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:14 crc kubenswrapper[4858]: I1205 14:03:14.760237 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:03:14 crc kubenswrapper[4858]: I1205 14:03:14.760283 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.020396 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-697b597b79-pzzs5"] Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.069295 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq"] Dec 05 14:03:15 crc kubenswrapper[4858]: E1205 
14:03:15.069570 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" containerName="route-controller-manager" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.069596 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" containerName="route-controller-manager" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.069722 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f59d96-5524-4b11-ac3b-b2634f94b6f7" containerName="route-controller-manager" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.070258 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.073054 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.073289 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.073671 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.073960 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.075520 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.077651 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.083518 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq"] Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.134454 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-config\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.134808 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da4d8381-fffa-49c1-828d-4dfe44c82180-serving-cert\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.134859 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4gg4\" (UniqueName: \"kubernetes.io/projected/da4d8381-fffa-49c1-828d-4dfe44c82180-kube-api-access-l4gg4\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: 
I1205 14:03:15.134879 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-client-ca\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.180354 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq"] Dec 05 14:03:15 crc kubenswrapper[4858]: E1205 14:03:15.180701 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-l4gg4 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" podUID="da4d8381-fffa-49c1-828d-4dfe44c82180" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.235387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-client-ca\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.235465 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-config\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.235500 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da4d8381-fffa-49c1-828d-4dfe44c82180-serving-cert\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.235527 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4gg4\" (UniqueName: \"kubernetes.io/projected/da4d8381-fffa-49c1-828d-4dfe44c82180-kube-api-access-l4gg4\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.236722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-client-ca\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.236961 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-config\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " 
pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.244766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da4d8381-fffa-49c1-828d-4dfe44c82180-serving-cert\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:15 crc kubenswrapper[4858]: I1205 14:03:15.261893 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4gg4\" (UniqueName: \"kubernetes.io/projected/da4d8381-fffa-49c1-828d-4dfe44c82180-kube-api-access-l4gg4\") pod \"route-controller-manager-759f757447-hg4cq\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.101782 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.101785 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" podUID="17ed58ca-3d70-466f-95c9-db0b00258e6f" containerName="controller-manager" containerID="cri-o://4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d" gracePeriod=30 Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.122954 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:16 crc kubenswrapper[4858]: E1205 14:03:16.142606 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17ed58ca_3d70_466f_95c9_db0b00258e6f.slice/crio-4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d.scope\": RecentStats: unable to find data in memory cache]" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.247485 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-config\") pod \"da4d8381-fffa-49c1-828d-4dfe44c82180\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.247577 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da4d8381-fffa-49c1-828d-4dfe44c82180-serving-cert\") pod \"da4d8381-fffa-49c1-828d-4dfe44c82180\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.247675 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4gg4\" (UniqueName: \"kubernetes.io/projected/da4d8381-fffa-49c1-828d-4dfe44c82180-kube-api-access-l4gg4\") pod \"da4d8381-fffa-49c1-828d-4dfe44c82180\" (UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.247713 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-client-ca\") pod \"da4d8381-fffa-49c1-828d-4dfe44c82180\" 
(UID: \"da4d8381-fffa-49c1-828d-4dfe44c82180\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.248255 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-client-ca" (OuterVolumeSpecName: "client-ca") pod "da4d8381-fffa-49c1-828d-4dfe44c82180" (UID: "da4d8381-fffa-49c1-828d-4dfe44c82180"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.248655 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-config" (OuterVolumeSpecName: "config") pod "da4d8381-fffa-49c1-828d-4dfe44c82180" (UID: "da4d8381-fffa-49c1-828d-4dfe44c82180"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.263031 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da4d8381-fffa-49c1-828d-4dfe44c82180-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da4d8381-fffa-49c1-828d-4dfe44c82180" (UID: "da4d8381-fffa-49c1-828d-4dfe44c82180"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.264805 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4d8381-fffa-49c1-828d-4dfe44c82180-kube-api-access-l4gg4" (OuterVolumeSpecName: "kube-api-access-l4gg4") pod "da4d8381-fffa-49c1-828d-4dfe44c82180" (UID: "da4d8381-fffa-49c1-828d-4dfe44c82180"). InnerVolumeSpecName "kube-api-access-l4gg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.349717 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4gg4\" (UniqueName: \"kubernetes.io/projected/da4d8381-fffa-49c1-828d-4dfe44c82180-kube-api-access-l4gg4\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.350070 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.350089 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d8381-fffa-49c1-828d-4dfe44c82180-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.350103 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da4d8381-fffa-49c1-828d-4dfe44c82180-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.565727 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.595711 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-w8cxt"] Dec 05 14:03:16 crc kubenswrapper[4858]: E1205 14:03:16.596131 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ed58ca-3d70-466f-95c9-db0b00258e6f" containerName="controller-manager" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.596155 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ed58ca-3d70-466f-95c9-db0b00258e6f" containerName="controller-manager" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.596663 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ed58ca-3d70-466f-95c9-db0b00258e6f" containerName="controller-manager" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.597261 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.614864 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-w8cxt"] Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.653538 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-client-ca\") pod \"17ed58ca-3d70-466f-95c9-db0b00258e6f\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.654037 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f99022d-12bb-435b-9577-7ebdd0e7b450-serving-cert\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.654109 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-client-ca\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.654146 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9qpk\" (UniqueName: \"kubernetes.io/projected/6f99022d-12bb-435b-9577-7ebdd0e7b450-kube-api-access-b9qpk\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.654178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-proxy-ca-bundles\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.654272 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-config\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.654381 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-client-ca" (OuterVolumeSpecName: "client-ca") pod "17ed58ca-3d70-466f-95c9-db0b00258e6f" (UID: "17ed58ca-3d70-466f-95c9-db0b00258e6f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.755562 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-config\") pod \"17ed58ca-3d70-466f-95c9-db0b00258e6f\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.755721 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17ed58ca-3d70-466f-95c9-db0b00258e6f-serving-cert\") pod \"17ed58ca-3d70-466f-95c9-db0b00258e6f\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.755798 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvtnq\" (UniqueName: \"kubernetes.io/projected/17ed58ca-3d70-466f-95c9-db0b00258e6f-kube-api-access-nvtnq\") pod \"17ed58ca-3d70-466f-95c9-db0b00258e6f\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.755970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-proxy-ca-bundles\") pod \"17ed58ca-3d70-466f-95c9-db0b00258e6f\" (UID: \"17ed58ca-3d70-466f-95c9-db0b00258e6f\") " Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.756185 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f99022d-12bb-435b-9577-7ebdd0e7b450-serving-cert\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.756245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-client-ca\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.756282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9qpk\" (UniqueName: \"kubernetes.io/projected/6f99022d-12bb-435b-9577-7ebdd0e7b450-kube-api-access-b9qpk\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.756306 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-proxy-ca-bundles\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.756353 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-config\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.756412 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.756437 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-config" (OuterVolumeSpecName: "config") pod "17ed58ca-3d70-466f-95c9-db0b00258e6f" (UID: "17ed58ca-3d70-466f-95c9-db0b00258e6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.758076 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-config\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.758714 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-client-ca\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.759098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "17ed58ca-3d70-466f-95c9-db0b00258e6f" (UID: "17ed58ca-3d70-466f-95c9-db0b00258e6f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.759759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-proxy-ca-bundles\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.762216 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f99022d-12bb-435b-9577-7ebdd0e7b450-serving-cert\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.775047 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ed58ca-3d70-466f-95c9-db0b00258e6f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17ed58ca-3d70-466f-95c9-db0b00258e6f" (UID: "17ed58ca-3d70-466f-95c9-db0b00258e6f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.775107 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ed58ca-3d70-466f-95c9-db0b00258e6f-kube-api-access-nvtnq" (OuterVolumeSpecName: "kube-api-access-nvtnq") pod "17ed58ca-3d70-466f-95c9-db0b00258e6f" (UID: "17ed58ca-3d70-466f-95c9-db0b00258e6f"). InnerVolumeSpecName "kube-api-access-nvtnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.779989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9qpk\" (UniqueName: \"kubernetes.io/projected/6f99022d-12bb-435b-9577-7ebdd0e7b450-kube-api-access-b9qpk\") pod \"controller-manager-6487bff6c8-w8cxt\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.857202 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.857254 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17ed58ca-3d70-466f-95c9-db0b00258e6f-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.857265 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17ed58ca-3d70-466f-95c9-db0b00258e6f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.857275 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvtnq\" (UniqueName: \"kubernetes.io/projected/17ed58ca-3d70-466f-95c9-db0b00258e6f-kube-api-access-nvtnq\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:16 crc kubenswrapper[4858]: I1205 14:03:16.911543 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.102783 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-w8cxt"] Dec 05 14:03:17 crc kubenswrapper[4858]: W1205 14:03:17.106423 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f99022d_12bb_435b_9577_7ebdd0e7b450.slice/crio-dabce34a13d2cdb40e268e1dd52317a1afd9dd3dd04d33ec666a17cbcdaf8860 WatchSource:0}: Error finding container dabce34a13d2cdb40e268e1dd52317a1afd9dd3dd04d33ec666a17cbcdaf8860: Status 404 returned error can't find the container with id dabce34a13d2cdb40e268e1dd52317a1afd9dd3dd04d33ec666a17cbcdaf8860 Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.109615 4858 generic.go:334] "Generic (PLEG): container finished" podID="17ed58ca-3d70-466f-95c9-db0b00258e6f" containerID="4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d" exitCode=0 Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.109683 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq" Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.109961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" event={"ID":"17ed58ca-3d70-466f-95c9-db0b00258e6f","Type":"ContainerDied","Data":"4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d"} Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.110008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5" event={"ID":"17ed58ca-3d70-466f-95c9-db0b00258e6f","Type":"ContainerDied","Data":"278ba43eccc79a541c66b299ee1d0645687af729f456de6892a86aed339dc8c4"} Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.110034 4858 scope.go:117] "RemoveContainer" containerID="4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d" Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.110093 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-697b597b79-pzzs5"
Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.126565 4858 scope.go:117] "RemoveContainer" containerID="4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d"
Dec 05 14:03:17 crc kubenswrapper[4858]: E1205 14:03:17.128270 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d\": container with ID starting with 4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d not found: ID does not exist" containerID="4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d"
Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.128303 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d"} err="failed to get container status \"4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d\": rpc error: code = NotFound desc = could not find container \"4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d\": container with ID starting with 4d79f701c70c0ce840d205c4c25e3e1a525eee05e9373551c8b2bb082a673b5d not found: ID does not exist"
Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.161419 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq"]
Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.164665 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759f757447-hg4cq"]
Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.168251 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-697b597b79-pzzs5"]
Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.171676 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-697b597b79-pzzs5"]
Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.908580 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ed58ca-3d70-466f-95c9-db0b00258e6f" path="/var/lib/kubelet/pods/17ed58ca-3d70-466f-95c9-db0b00258e6f/volumes"
Dec 05 14:03:17 crc kubenswrapper[4858]: I1205 14:03:17.909675 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4d8381-fffa-49c1-828d-4dfe44c82180" path="/var/lib/kubelet/pods/da4d8381-fffa-49c1-828d-4dfe44c82180/volumes"
Dec 05 14:03:18 crc kubenswrapper[4858]: I1205 14:03:18.117167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" event={"ID":"6f99022d-12bb-435b-9577-7ebdd0e7b450","Type":"ContainerStarted","Data":"e170047423245bccfb8efa7ab715c1ce0ad5e1af4d9028ef831c3e666b0e47c7"}
Dec 05 14:03:18 crc kubenswrapper[4858]: I1205 14:03:18.117210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" event={"ID":"6f99022d-12bb-435b-9577-7ebdd0e7b450","Type":"ContainerStarted","Data":"dabce34a13d2cdb40e268e1dd52317a1afd9dd3dd04d33ec666a17cbcdaf8860"}
Dec 05 14:03:18 crc kubenswrapper[4858]: I1205 14:03:18.118093 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt"
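The RemoveContainer/DeleteContainer entries at 14:03:17.126565-128303 above record a benign race: by the time the kubelet re-issues the delete, CRI-O has already removed container 4d79f701..., so the NotFound is logged at info level and the cleanup is treated as converged. A minimal sketch of that idempotent-removal pattern, assuming a gRPC-backed runtime client; the runtimeService interface, fakeRuntime, and removeContainer helper are illustrative stand-ins, not kubelet source:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService is a stand-in for a CRI runtime client (illustrative).
type runtimeService interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeContainer deletes a container but treats NotFound as success:
// if the runtime has already removed it, the desired state is reached.
func removeContainer(ctx context.Context, rt runtimeService, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // already gone; nothing left to do
	}
	return fmt.Errorf("remove container %q: %w", id, err)
}

// fakeRuntime simulates CRI-O answering a delete for a container that
// no longer exists, as in the log above.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(ctx context.Context, id string) error {
	return status.Error(codes.NotFound, "container not found")
}

func main() {
	if err := removeContainer(context.Background(), fakeRuntime{}, "4d79f701c70c"); err != nil {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Println("cleanup converged (NotFound treated as success)")
}
```

Treating NotFound as success is what makes repeated deletes safe to retry, which is why the entries above are informational rather than fatal.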
Dec 05 14:03:18 crc kubenswrapper[4858]: I1205 14:03:18.124721 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt"
Dec 05 14:03:18 crc kubenswrapper[4858]: I1205 14:03:18.136696 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" podStartSLOduration=3.136675399 podStartE2EDuration="3.136675399s" podCreationTimestamp="2025-12-05 14:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:03:18.132657527 +0000 UTC m=+406.680255666" watchObservedRunningTime="2025-12-05 14:03:18.136675399 +0000 UTC m=+406.684273538"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.333854 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"]
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.334707 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.337129 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.337466 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.337765 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.338017 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.339061 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.339248 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.342817 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"]
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.420161 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-serving-cert\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.420225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-config\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"
Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.420251 4858 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-client-ca\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.420326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnpf6\" (UniqueName: \"kubernetes.io/projected/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-kube-api-access-cnpf6\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.522421 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-serving-cert\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.522491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-config\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.522511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-client-ca\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.522571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnpf6\" (UniqueName: \"kubernetes.io/projected/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-kube-api-access-cnpf6\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.523691 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-client-ca\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.523897 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-config\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.534621 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-serving-cert\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.541016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnpf6\" (UniqueName: \"kubernetes.io/projected/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-kube-api-access-cnpf6\") pod \"route-controller-manager-7f8ccc4d89-5msjl\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:19 crc kubenswrapper[4858]: I1205 14:03:19.655060 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:20 crc kubenswrapper[4858]: I1205 14:03:20.148148 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"] Dec 05 14:03:20 crc kubenswrapper[4858]: W1205 14:03:20.155986 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceed7431_a3e7_4d1b_9135_1475d3aba4c5.slice/crio-361a4da9760c48299ba4ffb60f46e239967b87a564e512b87e12ac951fb2befe WatchSource:0}: Error finding container 361a4da9760c48299ba4ffb60f46e239967b87a564e512b87e12ac951fb2befe: Status 404 returned error can't find the container with id 361a4da9760c48299ba4ffb60f46e239967b87a564e512b87e12ac951fb2befe Dec 05 14:03:21 crc kubenswrapper[4858]: I1205 14:03:21.136276 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" event={"ID":"ceed7431-a3e7-4d1b-9135-1475d3aba4c5","Type":"ContainerStarted","Data":"06d9636965a58b48dd471fe94a69e223fe382c38a56b59c53aea332f2b19aaec"} Dec 05 14:03:21 crc kubenswrapper[4858]: I1205 14:03:21.136590 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" event={"ID":"ceed7431-a3e7-4d1b-9135-1475d3aba4c5","Type":"ContainerStarted","Data":"361a4da9760c48299ba4ffb60f46e239967b87a564e512b87e12ac951fb2befe"} Dec 05 14:03:21 crc kubenswrapper[4858]: I1205 14:03:21.136632 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:21 crc kubenswrapper[4858]: I1205 14:03:21.161994 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" podStartSLOduration=6.16196936 podStartE2EDuration="6.16196936s" podCreationTimestamp="2025-12-05 14:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:03:21.15661581 +0000 UTC m=+409.704213949" watchObservedRunningTime="2025-12-05 14:03:21.16196936 +0000 UTC m=+409.709567499" Dec 05 14:03:21 crc kubenswrapper[4858]: I1205 14:03:21.304910 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:30 crc kubenswrapper[4858]: I1205 14:03:30.906771 4858 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"] Dec 05 14:03:30 crc kubenswrapper[4858]: I1205 14:03:30.907969 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" podUID="ceed7431-a3e7-4d1b-9135-1475d3aba4c5" containerName="route-controller-manager" containerID="cri-o://06d9636965a58b48dd471fe94a69e223fe382c38a56b59c53aea332f2b19aaec" gracePeriod=30 Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.186501 4858 generic.go:334] "Generic (PLEG): container finished" podID="ceed7431-a3e7-4d1b-9135-1475d3aba4c5" containerID="06d9636965a58b48dd471fe94a69e223fe382c38a56b59c53aea332f2b19aaec" exitCode=0 Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.186579 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" event={"ID":"ceed7431-a3e7-4d1b-9135-1475d3aba4c5","Type":"ContainerDied","Data":"06d9636965a58b48dd471fe94a69e223fe382c38a56b59c53aea332f2b19aaec"} Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.354440 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.515178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnpf6\" (UniqueName: \"kubernetes.io/projected/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-kube-api-access-cnpf6\") pod \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.515600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-config\") pod \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.515660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-client-ca\") pod \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.515689 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-serving-cert\") pod \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\" (UID: \"ceed7431-a3e7-4d1b-9135-1475d3aba4c5\") " Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.516254 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-config" (OuterVolumeSpecName: "config") pod "ceed7431-a3e7-4d1b-9135-1475d3aba4c5" (UID: "ceed7431-a3e7-4d1b-9135-1475d3aba4c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.516247 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-client-ca" (OuterVolumeSpecName: "client-ca") pod "ceed7431-a3e7-4d1b-9135-1475d3aba4c5" (UID: "ceed7431-a3e7-4d1b-9135-1475d3aba4c5"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.521191 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ceed7431-a3e7-4d1b-9135-1475d3aba4c5" (UID: "ceed7431-a3e7-4d1b-9135-1475d3aba4c5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.521204 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-kube-api-access-cnpf6" (OuterVolumeSpecName: "kube-api-access-cnpf6") pod "ceed7431-a3e7-4d1b-9135-1475d3aba4c5" (UID: "ceed7431-a3e7-4d1b-9135-1475d3aba4c5"). InnerVolumeSpecName "kube-api-access-cnpf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.617658 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.617705 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.617714 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnpf6\" (UniqueName: \"kubernetes.io/projected/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-kube-api-access-cnpf6\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:31 crc kubenswrapper[4858]: I1205 14:03:31.617725 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceed7431-a3e7-4d1b-9135-1475d3aba4c5-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.193580 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" event={"ID":"ceed7431-a3e7-4d1b-9135-1475d3aba4c5","Type":"ContainerDied","Data":"361a4da9760c48299ba4ffb60f46e239967b87a564e512b87e12ac951fb2befe"} Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.193623 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.193666 4858 scope.go:117] "RemoveContainer" containerID="06d9636965a58b48dd471fe94a69e223fe382c38a56b59c53aea332f2b19aaec" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.228961 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"] Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.238997 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f8ccc4d89-5msjl"] Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.341562 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn"] Dec 05 14:03:32 crc kubenswrapper[4858]: E1205 14:03:32.342001 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceed7431-a3e7-4d1b-9135-1475d3aba4c5" containerName="route-controller-manager" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.342022 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceed7431-a3e7-4d1b-9135-1475d3aba4c5" containerName="route-controller-manager" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.342154 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceed7431-a3e7-4d1b-9135-1475d3aba4c5" containerName="route-controller-manager" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.342844 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.347317 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.347547 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.347631 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.347800 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.349381 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.349549 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.354354 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn"] Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.532047 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e76c9b7-a280-482b-bd9f-507fd2950dc6-client-ca\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc 
kubenswrapper[4858]: I1205 14:03:32.532112 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74d4h\" (UniqueName: \"kubernetes.io/projected/2e76c9b7-a280-482b-bd9f-507fd2950dc6-kube-api-access-74d4h\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.532396 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e76c9b7-a280-482b-bd9f-507fd2950dc6-config\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.532436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e76c9b7-a280-482b-bd9f-507fd2950dc6-serving-cert\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.633716 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e76c9b7-a280-482b-bd9f-507fd2950dc6-config\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.634050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e76c9b7-a280-482b-bd9f-507fd2950dc6-serving-cert\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.634363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e76c9b7-a280-482b-bd9f-507fd2950dc6-client-ca\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.634444 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74d4h\" (UniqueName: \"kubernetes.io/projected/2e76c9b7-a280-482b-bd9f-507fd2950dc6-kube-api-access-74d4h\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.635962 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e76c9b7-a280-482b-bd9f-507fd2950dc6-client-ca\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: 
I1205 14:03:32.636381 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e76c9b7-a280-482b-bd9f-507fd2950dc6-config\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.641675 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e76c9b7-a280-482b-bd9f-507fd2950dc6-serving-cert\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.660173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74d4h\" (UniqueName: \"kubernetes.io/projected/2e76c9b7-a280-482b-bd9f-507fd2950dc6-kube-api-access-74d4h\") pod \"route-controller-manager-759f757447-m6wpn\" (UID: \"2e76c9b7-a280-482b-bd9f-507fd2950dc6\") " pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:32 crc kubenswrapper[4858]: I1205 14:03:32.956814 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:33 crc kubenswrapper[4858]: I1205 14:03:33.429952 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn"] Dec 05 14:03:33 crc kubenswrapper[4858]: I1205 14:03:33.908721 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceed7431-a3e7-4d1b-9135-1475d3aba4c5" path="/var/lib/kubelet/pods/ceed7431-a3e7-4d1b-9135-1475d3aba4c5/volumes" Dec 05 14:03:34 crc kubenswrapper[4858]: I1205 14:03:34.208488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" event={"ID":"2e76c9b7-a280-482b-bd9f-507fd2950dc6","Type":"ContainerStarted","Data":"efb70490dd267273e72583c8b491caae26133a4656c69b18d5c1831605efa39b"} Dec 05 14:03:34 crc kubenswrapper[4858]: I1205 14:03:34.208553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" event={"ID":"2e76c9b7-a280-482b-bd9f-507fd2950dc6","Type":"ContainerStarted","Data":"fd422d4f8840147e4c448599b113a66f9c75790c90e6a479dc7f506e0f77d3dd"} Dec 05 14:03:34 crc kubenswrapper[4858]: I1205 14:03:34.208779 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:34 crc kubenswrapper[4858]: I1205 14:03:34.458887 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" Dec 05 14:03:34 crc kubenswrapper[4858]: I1205 14:03:34.484229 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" podStartSLOduration=4.484210214 podStartE2EDuration="4.484210214s" podCreationTimestamp="2025-12-05 14:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:03:34.23298545 
+0000 UTC m=+422.780583589" watchObservedRunningTime="2025-12-05 14:03:34.484210214 +0000 UTC m=+423.031808343"
Dec 05 14:03:44 crc kubenswrapper[4858]: I1205 14:03:44.759965 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 14:03:44 crc kubenswrapper[4858]: I1205 14:03:44.760440 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 14:03:44 crc kubenswrapper[4858]: I1205 14:03:44.760493 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn"
Dec 05 14:03:44 crc kubenswrapper[4858]: I1205 14:03:44.761362 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ab1fc1ade15987d254249f652eeb63b38a39486edb0297f61ed8eaf801d6fa5"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 14:03:44 crc kubenswrapper[4858]: I1205 14:03:44.761430 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://3ab1fc1ade15987d254249f652eeb63b38a39486edb0297f61ed8eaf801d6fa5" gracePeriod=600
Dec 05 14:03:45 crc kubenswrapper[4858]: I1205 14:03:45.262587 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="3ab1fc1ade15987d254249f652eeb63b38a39486edb0297f61ed8eaf801d6fa5" exitCode=0
Dec 05 14:03:45 crc kubenswrapper[4858]: I1205 14:03:45.262657 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"3ab1fc1ade15987d254249f652eeb63b38a39486edb0297f61ed8eaf801d6fa5"}
Dec 05 14:03:45 crc kubenswrapper[4858]: I1205 14:03:45.263002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"b223ebad30a2f7caa7c0f9f256f2d9437e338680d956fb743d7b1bcdf70d4a7c"}
Dec 05 14:03:45 crc kubenswrapper[4858]: I1205 14:03:45.263025 4858 scope.go:117] "RemoveContainer" containerID="0480461e4167a0b44070349d3e52671a4352080822c4603e91cca15dcdbe9faf"
Dec 05 14:03:50 crc kubenswrapper[4858]: I1205 14:03:50.883213 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-w8cxt"]
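The machine-config-daemon entries at 14:03:44-14:03:45 above trace the liveness path end to end: an HTTP GET to 127.0.0.1:8798/health is refused, the probe is reported unhealthy, and the container is killed with its 600s grace period and restarted. A minimal sketch of that probe-then-restart loop, assuming a plain HTTP check with a timeout; the 10s period, failure threshold of 3, and the printed "kill" step are illustrative, not this pod's actual probe spec:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check; a transport error such as
// "connection refused" or a non-2xx status counts as a failure.
func probe(client *http.Client, url string) bool {
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 300
}

func main() {
	client := &http.Client{Timeout: time.Second}
	url := "http://127.0.0.1:8798/health" // endpoint from the log entries above
	const threshold = 3                   // consecutive failures before a restart
	failures := 0
	for range time.Tick(10 * time.Second) {
		if probe(client, url) {
			failures = 0
			continue
		}
		failures++
		fmt.Printf("liveness probe failed (%d/%d)\n", failures, threshold)
		if failures >= threshold {
			// The kubelet kills the container with the pod's grace period
			// (gracePeriod=600 above) and lets the restart policy recreate it.
			fmt.Println("threshold reached: kill container and restart")
			failures = 0
		}
	}
}
```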
containerID="cri-o://e170047423245bccfb8efa7ab715c1ce0ad5e1af4d9028ef831c3e666b0e47c7" gracePeriod=30 Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.298040 4858 generic.go:334] "Generic (PLEG): container finished" podID="6f99022d-12bb-435b-9577-7ebdd0e7b450" containerID="e170047423245bccfb8efa7ab715c1ce0ad5e1af4d9028ef831c3e666b0e47c7" exitCode=0 Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.298305 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" event={"ID":"6f99022d-12bb-435b-9577-7ebdd0e7b450","Type":"ContainerDied","Data":"e170047423245bccfb8efa7ab715c1ce0ad5e1af4d9028ef831c3e666b0e47c7"} Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.351774 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.381624 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-proxy-ca-bundles\") pod \"6f99022d-12bb-435b-9577-7ebdd0e7b450\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.382395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6f99022d-12bb-435b-9577-7ebdd0e7b450" (UID: "6f99022d-12bb-435b-9577-7ebdd0e7b450"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.482484 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f99022d-12bb-435b-9577-7ebdd0e7b450-serving-cert\") pod \"6f99022d-12bb-435b-9577-7ebdd0e7b450\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.482598 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9qpk\" (UniqueName: \"kubernetes.io/projected/6f99022d-12bb-435b-9577-7ebdd0e7b450-kube-api-access-b9qpk\") pod \"6f99022d-12bb-435b-9577-7ebdd0e7b450\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.482637 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-config\") pod \"6f99022d-12bb-435b-9577-7ebdd0e7b450\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.482695 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-client-ca\") pod \"6f99022d-12bb-435b-9577-7ebdd0e7b450\" (UID: \"6f99022d-12bb-435b-9577-7ebdd0e7b450\") " Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.483383 4858 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.483462 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-client-ca" (OuterVolumeSpecName: "client-ca") pod "6f99022d-12bb-435b-9577-7ebdd0e7b450" (UID: "6f99022d-12bb-435b-9577-7ebdd0e7b450"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.484218 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-config" (OuterVolumeSpecName: "config") pod "6f99022d-12bb-435b-9577-7ebdd0e7b450" (UID: "6f99022d-12bb-435b-9577-7ebdd0e7b450"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.488529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f99022d-12bb-435b-9577-7ebdd0e7b450-kube-api-access-b9qpk" (OuterVolumeSpecName: "kube-api-access-b9qpk") pod "6f99022d-12bb-435b-9577-7ebdd0e7b450" (UID: "6f99022d-12bb-435b-9577-7ebdd0e7b450"). InnerVolumeSpecName "kube-api-access-b9qpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.491509 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f99022d-12bb-435b-9577-7ebdd0e7b450-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6f99022d-12bb-435b-9577-7ebdd0e7b450" (UID: "6f99022d-12bb-435b-9577-7ebdd0e7b450"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.584969 4858 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.585008 4858 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f99022d-12bb-435b-9577-7ebdd0e7b450-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.585021 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9qpk\" (UniqueName: \"kubernetes.io/projected/6f99022d-12bb-435b-9577-7ebdd0e7b450-kube-api-access-b9qpk\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:51 crc kubenswrapper[4858]: I1205 14:03:51.585032 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f99022d-12bb-435b-9577-7ebdd0e7b450-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.304513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" event={"ID":"6f99022d-12bb-435b-9577-7ebdd0e7b450","Type":"ContainerDied","Data":"dabce34a13d2cdb40e268e1dd52317a1afd9dd3dd04d33ec666a17cbcdaf8860"} Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.304549 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6487bff6c8-w8cxt" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.304583 4858 scope.go:117] "RemoveContainer" containerID="e170047423245bccfb8efa7ab715c1ce0ad5e1af4d9028ef831c3e666b0e47c7" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.321907 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-w8cxt"] Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.325250 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6487bff6c8-w8cxt"] Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.357414 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74b47c9b9-pdvnc"] Dec 05 14:03:52 crc kubenswrapper[4858]: E1205 14:03:52.357746 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f99022d-12bb-435b-9577-7ebdd0e7b450" containerName="controller-manager" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.357770 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f99022d-12bb-435b-9577-7ebdd0e7b450" containerName="controller-manager" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.358042 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f99022d-12bb-435b-9577-7ebdd0e7b450" containerName="controller-manager" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.359391 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.365073 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.365812 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.365960 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.366141 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.366425 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.366788 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74b47c9b9-pdvnc"] Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.366812 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.372277 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.495854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7fa59-6622-4740-aa51-89d994381fe4-serving-cert\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " 
pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.495914 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-client-ca\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.496143 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-proxy-ca-bundles\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.496181 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-config\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.496200 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnsvw\" (UniqueName: \"kubernetes.io/projected/34b7fa59-6622-4740-aa51-89d994381fe4-kube-api-access-cnsvw\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.598273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7fa59-6622-4740-aa51-89d994381fe4-serving-cert\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.598335 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-client-ca\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.598385 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-proxy-ca-bundles\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.598436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-config\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.598460 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnsvw\" (UniqueName: \"kubernetes.io/projected/34b7fa59-6622-4740-aa51-89d994381fe4-kube-api-access-cnsvw\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.599577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-client-ca\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.599859 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-proxy-ca-bundles\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.600312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7fa59-6622-4740-aa51-89d994381fe4-config\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.603768 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7fa59-6622-4740-aa51-89d994381fe4-serving-cert\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.615813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnsvw\" (UniqueName: \"kubernetes.io/projected/34b7fa59-6622-4740-aa51-89d994381fe4-kube-api-access-cnsvw\") pod \"controller-manager-74b47c9b9-pdvnc\" (UID: \"34b7fa59-6622-4740-aa51-89d994381fe4\") " pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.685147 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:52 crc kubenswrapper[4858]: I1205 14:03:52.950467 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74b47c9b9-pdvnc"] Dec 05 14:03:53 crc kubenswrapper[4858]: I1205 14:03:53.312027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" event={"ID":"34b7fa59-6622-4740-aa51-89d994381fe4","Type":"ContainerStarted","Data":"b84941f16f6e26006a57c22963a8cddcea04b5d50a2126745a9ef380e423b984"} Dec 05 14:03:53 crc kubenswrapper[4858]: I1205 14:03:53.312327 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" event={"ID":"34b7fa59-6622-4740-aa51-89d994381fe4","Type":"ContainerStarted","Data":"821c08e424b530622d6cf0d23080300a0fa67085bf5e6c6d69a2f3977ea3e204"} Dec 05 14:03:53 crc kubenswrapper[4858]: I1205 14:03:53.312349 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:53 crc kubenswrapper[4858]: I1205 14:03:53.326301 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" Dec 05 14:03:53 crc kubenswrapper[4858]: I1205 14:03:53.363895 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" podStartSLOduration=3.363877953 podStartE2EDuration="3.363877953s" podCreationTimestamp="2025-12-05 14:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:03:53.3608776 +0000 UTC m=+441.908475749" watchObservedRunningTime="2025-12-05 14:03:53.363877953 +0000 UTC m=+441.911476092" Dec 05 14:03:53 crc kubenswrapper[4858]: I1205 14:03:53.911567 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f99022d-12bb-435b-9577-7ebdd0e7b450" path="/var/lib/kubelet/pods/6f99022d-12bb-435b-9577-7ebdd0e7b450/volumes" Dec 05 14:06:14 crc kubenswrapper[4858]: I1205 14:06:14.760028 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:06:14 crc kubenswrapper[4858]: I1205 14:06:14.760502 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.193548 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tpcgh"] Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.194896 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh"
Dec 05 14:06:40 crc kubenswrapper[4858]: W1205 14:06:40.199200 4858 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-ffb9k": failed to list *v1.Secret: secrets "cert-manager-cainjector-dockercfg-ffb9k" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object
Dec 05 14:06:40 crc kubenswrapper[4858]: E1205 14:06:40.199241 4858 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-ffb9k\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-cainjector-dockercfg-ffb9k\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 05 14:06:40 crc kubenswrapper[4858]: W1205 14:06:40.199473 4858 reflector.go:561] object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object
Dec 05 14:06:40 crc kubenswrapper[4858]: E1205 14:06:40.199928 4858 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 05 14:06:40 crc kubenswrapper[4858]: W1205 14:06:40.200074 4858 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object
Dec 05 14:06:40 crc kubenswrapper[4858]: E1205 14:06:40.200087 4858 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.210221 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qg6fx"]
Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.211013 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-qg6fx"
Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.212409 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jfkzb"
Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.218074 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-5mx92"]
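The Forbidden warnings above are the node authorizer at work: User "system:node:crc" may only read secrets and configmaps referenced by pods already bound to the node, and the cert-manager pods have only just been scheduled, so the reflector's list/watch is rejected until the pod-to-node relationship propagates (the later "Caches populated" entries for the same objects show the recovery). A small client-go sketch of the distinction, assuming in-cluster credentials; a namespace-wide List is typically denied to a node-scoped client while a targeted Get of an object referenced by a bound pod succeeds once the binding is visible:

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes node-scoped credentials, like the kubelet's
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// A broad List is what a reflector attempts; the node authorizer denies it.
	if _, err := client.CoreV1().Secrets("cert-manager").List(ctx, metav1.ListOptions{}); apierrors.IsForbidden(err) {
		fmt.Println("list denied, as in the log:", err)
	}

	// A targeted Get for an object referenced by a bound pod is allowed once
	// the pod-to-node binding has propagated to the authorizer's graph.
	s, err := client.CoreV1().Secrets("cert-manager").Get(ctx, "cert-manager-cainjector-dockercfg-ffb9k", metav1.GetOptions{})
	if err != nil {
		fmt.Println("get still denied or missing:", err)
		return
	}
	fmt.Println("get succeeded for", s.Name)
}
```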
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.221497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kg77n" Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.239777 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lbn5\" (UniqueName: \"kubernetes.io/projected/473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7-kube-api-access-2lbn5\") pod \"cert-manager-cainjector-7f985d654d-tpcgh\" (UID: \"473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh" Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.239934 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbsmf\" (UniqueName: \"kubernetes.io/projected/f371340e-c0a7-4cce-9a93-aee21a8c39f1-kube-api-access-pbsmf\") pod \"cert-manager-5b446d88c5-qg6fx\" (UID: \"f371340e-c0a7-4cce-9a93-aee21a8c39f1\") " pod="cert-manager/cert-manager-5b446d88c5-qg6fx" Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.239976 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55tj5\" (UniqueName: \"kubernetes.io/projected/cb76164b-d338-4395-af71-e6dd098c165f-kube-api-access-55tj5\") pod \"cert-manager-webhook-5655c58dd6-5mx92\" (UID: \"cb76164b-d338-4395-af71-e6dd098c165f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.245366 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tpcgh"] Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.250965 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qg6fx"] Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.254789 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-5mx92"] Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.341170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbsmf\" (UniqueName: \"kubernetes.io/projected/f371340e-c0a7-4cce-9a93-aee21a8c39f1-kube-api-access-pbsmf\") pod \"cert-manager-5b446d88c5-qg6fx\" (UID: \"f371340e-c0a7-4cce-9a93-aee21a8c39f1\") " pod="cert-manager/cert-manager-5b446d88c5-qg6fx" Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.341245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55tj5\" (UniqueName: \"kubernetes.io/projected/cb76164b-d338-4395-af71-e6dd098c165f-kube-api-access-55tj5\") pod \"cert-manager-webhook-5655c58dd6-5mx92\" (UID: \"cb76164b-d338-4395-af71-e6dd098c165f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 14:06:40 crc kubenswrapper[4858]: I1205 14:06:40.341299 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lbn5\" (UniqueName: \"kubernetes.io/projected/473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7-kube-api-access-2lbn5\") pod \"cert-manager-cainjector-7f985d654d-tpcgh\" (UID: \"473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh" Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.354323 4858 projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out 
waiting for the condition Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.355402 4858 projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.356514 4858 projected.go:288] Couldn't get configMap cert-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 05 14:06:41 crc kubenswrapper[4858]: I1205 14:06:41.448381 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 05 14:06:41 crc kubenswrapper[4858]: I1205 14:06:41.597426 4858 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ffb9k" Dec 05 14:06:41 crc kubenswrapper[4858]: I1205 14:06:41.767005 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.774638 4858 projected.go:194] Error preparing data for projected volume kube-api-access-2lbn5 for pod cert-manager/cert-manager-cainjector-7f985d654d-tpcgh: failed to sync configmap cache: timed out waiting for the condition Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.774730 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7-kube-api-access-2lbn5 podName:473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7 nodeName:}" failed. No retries permitted until 2025-12-05 14:06:42.27470908 +0000 UTC m=+610.822307219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2lbn5" (UniqueName: "kubernetes.io/projected/473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7-kube-api-access-2lbn5") pod "cert-manager-cainjector-7f985d654d-tpcgh" (UID: "473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7") : failed to sync configmap cache: timed out waiting for the condition Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.775888 4858 projected.go:194] Error preparing data for projected volume kube-api-access-pbsmf for pod cert-manager/cert-manager-5b446d88c5-qg6fx: failed to sync configmap cache: timed out waiting for the condition Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.775970 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f371340e-c0a7-4cce-9a93-aee21a8c39f1-kube-api-access-pbsmf podName:f371340e-c0a7-4cce-9a93-aee21a8c39f1 nodeName:}" failed. No retries permitted until 2025-12-05 14:06:42.275950524 +0000 UTC m=+610.823548663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pbsmf" (UniqueName: "kubernetes.io/projected/f371340e-c0a7-4cce-9a93-aee21a8c39f1-kube-api-access-pbsmf") pod "cert-manager-5b446d88c5-qg6fx" (UID: "f371340e-c0a7-4cce-9a93-aee21a8c39f1") : failed to sync configmap cache: timed out waiting for the condition Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.777177 4858 projected.go:194] Error preparing data for projected volume kube-api-access-55tj5 for pod cert-manager/cert-manager-webhook-5655c58dd6-5mx92: failed to sync configmap cache: timed out waiting for the condition Dec 05 14:06:41 crc kubenswrapper[4858]: E1205 14:06:41.777228 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb76164b-d338-4395-af71-e6dd098c165f-kube-api-access-55tj5 podName:cb76164b-d338-4395-af71-e6dd098c165f nodeName:}" failed. 
No retries permitted until 2025-12-05 14:06:42.277216439 +0000 UTC m=+610.824814778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-55tj5" (UniqueName: "kubernetes.io/projected/cb76164b-d338-4395-af71-e6dd098c165f-kube-api-access-55tj5") pod "cert-manager-webhook-5655c58dd6-5mx92" (UID: "cb76164b-d338-4395-af71-e6dd098c165f") : failed to sync configmap cache: timed out waiting for the condition Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.365876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbsmf\" (UniqueName: \"kubernetes.io/projected/f371340e-c0a7-4cce-9a93-aee21a8c39f1-kube-api-access-pbsmf\") pod \"cert-manager-5b446d88c5-qg6fx\" (UID: \"f371340e-c0a7-4cce-9a93-aee21a8c39f1\") " pod="cert-manager/cert-manager-5b446d88c5-qg6fx" Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.366191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55tj5\" (UniqueName: \"kubernetes.io/projected/cb76164b-d338-4395-af71-e6dd098c165f-kube-api-access-55tj5\") pod \"cert-manager-webhook-5655c58dd6-5mx92\" (UID: \"cb76164b-d338-4395-af71-e6dd098c165f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.366263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lbn5\" (UniqueName: \"kubernetes.io/projected/473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7-kube-api-access-2lbn5\") pod \"cert-manager-cainjector-7f985d654d-tpcgh\" (UID: \"473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh" Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.372741 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55tj5\" (UniqueName: \"kubernetes.io/projected/cb76164b-d338-4395-af71-e6dd098c165f-kube-api-access-55tj5\") pod \"cert-manager-webhook-5655c58dd6-5mx92\" (UID: \"cb76164b-d338-4395-af71-e6dd098c165f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.372929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lbn5\" (UniqueName: \"kubernetes.io/projected/473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7-kube-api-access-2lbn5\") pod \"cert-manager-cainjector-7f985d654d-tpcgh\" (UID: \"473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh" Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.376702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbsmf\" (UniqueName: \"kubernetes.io/projected/f371340e-c0a7-4cce-9a93-aee21a8c39f1-kube-api-access-pbsmf\") pod \"cert-manager-5b446d88c5-qg6fx\" (UID: \"f371340e-c0a7-4cce-9a93-aee21a8c39f1\") " pod="cert-manager/cert-manager-5b446d88c5-qg6fx" Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.617950 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh" Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.632635 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-qg6fx" Dec 05 14:06:42 crc kubenswrapper[4858]: I1205 14:06:42.641220 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 14:06:43 crc kubenswrapper[4858]: I1205 14:06:43.048101 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-qg6fx"] Dec 05 14:06:43 crc kubenswrapper[4858]: W1205 14:06:43.059969 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf371340e_c0a7_4cce_9a93_aee21a8c39f1.slice/crio-3d038dacd865dfbca59ff4def153910c03b9ee8411947ca866d302daa64182b5 WatchSource:0}: Error finding container 3d038dacd865dfbca59ff4def153910c03b9ee8411947ca866d302daa64182b5: Status 404 returned error can't find the container with id 3d038dacd865dfbca59ff4def153910c03b9ee8411947ca866d302daa64182b5 Dec 05 14:06:43 crc kubenswrapper[4858]: I1205 14:06:43.063138 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:06:43 crc kubenswrapper[4858]: I1205 14:06:43.099274 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-5mx92"] Dec 05 14:06:43 crc kubenswrapper[4858]: I1205 14:06:43.103120 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-tpcgh"] Dec 05 14:06:43 crc kubenswrapper[4858]: W1205 14:06:43.105052 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod473da77c_e0fe_4c92_ae6b_6dcb9e12e4e7.slice/crio-4763e6ec1d75091041d5c26bbfe5466327a85a2438eff0680ef41d65a97393d1 WatchSource:0}: Error finding container 4763e6ec1d75091041d5c26bbfe5466327a85a2438eff0680ef41d65a97393d1: Status 404 returned error can't find the container with id 4763e6ec1d75091041d5c26bbfe5466327a85a2438eff0680ef41d65a97393d1 Dec 05 14:06:43 crc kubenswrapper[4858]: W1205 14:06:43.105908 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb76164b_d338_4395_af71_e6dd098c165f.slice/crio-2ad47a41013d18be172937696e609a766ff64d8ee64117f4295ad9c81aa1d7e8 WatchSource:0}: Error finding container 2ad47a41013d18be172937696e609a766ff64d8ee64117f4295ad9c81aa1d7e8: Status 404 returned error can't find the container with id 2ad47a41013d18be172937696e609a766ff64d8ee64117f4295ad9c81aa1d7e8 Dec 05 14:06:43 crc kubenswrapper[4858]: I1205 14:06:43.981606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" event={"ID":"cb76164b-d338-4395-af71-e6dd098c165f","Type":"ContainerStarted","Data":"2ad47a41013d18be172937696e609a766ff64d8ee64117f4295ad9c81aa1d7e8"} Dec 05 14:06:43 crc kubenswrapper[4858]: I1205 14:06:43.983623 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-qg6fx" event={"ID":"f371340e-c0a7-4cce-9a93-aee21a8c39f1","Type":"ContainerStarted","Data":"3d038dacd865dfbca59ff4def153910c03b9ee8411947ca866d302daa64182b5"} Dec 05 14:06:43 crc kubenswrapper[4858]: I1205 14:06:43.984403 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh" event={"ID":"473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7","Type":"ContainerStarted","Data":"4763e6ec1d75091041d5c26bbfe5466327a85a2438eff0680ef41d65a97393d1"} Dec 05 14:06:44 crc kubenswrapper[4858]: I1205 14:06:44.759797 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:06:44 crc kubenswrapper[4858]: I1205 14:06:44.759885 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:06:46 crc kubenswrapper[4858]: I1205 14:06:46.007071 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh" event={"ID":"473da77c-e0fe-4c92-ae6b-6dcb9e12e4e7","Type":"ContainerStarted","Data":"d362efa27a6db406097929557a3589d336e543ca40a42830b2d4dca37d2f648e"} Dec 05 14:06:46 crc kubenswrapper[4858]: I1205 14:06:46.012651 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" event={"ID":"cb76164b-d338-4395-af71-e6dd098c165f","Type":"ContainerStarted","Data":"52c402d753cb402fcc292ca85ca222a17c7346314c40be0536250023433b613a"} Dec 05 14:06:46 crc kubenswrapper[4858]: I1205 14:06:46.013491 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 14:06:46 crc kubenswrapper[4858]: I1205 14:06:46.027888 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-tpcgh" podStartSLOduration=3.798730238 podStartE2EDuration="6.027870522s" podCreationTimestamp="2025-12-05 14:06:40 +0000 UTC" firstStartedPulling="2025-12-05 14:06:43.108185037 +0000 UTC m=+611.655783186" lastFinishedPulling="2025-12-05 14:06:45.337325321 +0000 UTC m=+613.884923470" observedRunningTime="2025-12-05 14:06:46.025027064 +0000 UTC m=+614.572625203" watchObservedRunningTime="2025-12-05 14:06:46.027870522 +0000 UTC m=+614.575468661" Dec 05 14:06:47 crc kubenswrapper[4858]: I1205 14:06:47.019188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-qg6fx" event={"ID":"f371340e-c0a7-4cce-9a93-aee21a8c39f1","Type":"ContainerStarted","Data":"414bb82c7b899e9ac2ebbd6f5ea4d4312eca0e08b0e917cdfe684356635c4954"} Dec 05 14:06:47 crc kubenswrapper[4858]: I1205 14:06:47.034760 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-qg6fx" podStartSLOduration=3.952243931 podStartE2EDuration="7.034739035s" podCreationTimestamp="2025-12-05 14:06:40 +0000 UTC" firstStartedPulling="2025-12-05 14:06:43.06267029 +0000 UTC m=+611.610268449" lastFinishedPulling="2025-12-05 14:06:46.145165414 +0000 UTC m=+614.692763553" observedRunningTime="2025-12-05 14:06:47.031889857 +0000 UTC m=+615.579488006" watchObservedRunningTime="2025-12-05 14:06:47.034739035 +0000 UTC m=+615.582337174" Dec 05 14:06:47 crc kubenswrapper[4858]: I1205 14:06:47.036696 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podStartSLOduration=4.765265225 podStartE2EDuration="7.036681487s" podCreationTimestamp="2025-12-05 14:06:40 +0000 UTC" firstStartedPulling="2025-12-05 14:06:43.108193827 +0000 UTC m=+611.655791966" lastFinishedPulling="2025-12-05 14:06:45.379610089 +0000 UTC m=+613.927208228" observedRunningTime="2025-12-05 14:06:46.038614246 +0000 UTC 
m=+614.586212385" watchObservedRunningTime="2025-12-05 14:06:47.036681487 +0000 UTC m=+615.584279636" Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.198552 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jtntj"] Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.199392 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovn-controller" containerID="cri-o://ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2" gracePeriod=30 Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.199516 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovn-acl-logging" containerID="cri-o://31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe" gracePeriod=30 Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.199536 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kube-rbac-proxy-node" containerID="cri-o://56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba" gracePeriod=30 Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.199744 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="sbdb" containerID="cri-o://ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283" gracePeriod=30 Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.199749 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2" gracePeriod=30 Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.199709 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="northd" containerID="cri-o://38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268" gracePeriod=30 Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.199923 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="nbdb" containerID="cri-o://08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815" gracePeriod=30 Dec 05 14:06:50 crc kubenswrapper[4858]: I1205 14:06:50.235673 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" containerID="cri-o://611593e9406f66fd9b7a45a42975c96597f67d79f43cb9a6f559ac14d2bfb1f5" gracePeriod=30 Dec 05 14:06:52 crc kubenswrapper[4858]: I1205 14:06:52.051871 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/3.log" Dec 05 14:06:52 crc kubenswrapper[4858]: I1205 14:06:52.054744 4858 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovn-acl-logging/0.log" Dec 05 14:06:52 crc kubenswrapper[4858]: I1205 14:06:52.056838 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe" exitCode=143 Dec 05 14:06:52 crc kubenswrapper[4858]: I1205 14:06:52.056867 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe"} Dec 05 14:06:52 crc kubenswrapper[4858]: I1205 14:06:52.643733 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.064916 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovnkube-controller/3.log" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.068280 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovn-acl-logging/0.log" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069296 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovn-controller/0.log" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069579 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="611593e9406f66fd9b7a45a42975c96597f67d79f43cb9a6f559ac14d2bfb1f5" exitCode=0 Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069602 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283" exitCode=0 Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069610 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815" exitCode=0 Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069617 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268" exitCode=0 Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069626 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2" exitCode=0 Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069633 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba" exitCode=0 Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069641 4858 generic.go:334] "Generic (PLEG): container finished" podID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerID="ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2" exitCode=143 Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" 
event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"611593e9406f66fd9b7a45a42975c96597f67d79f43cb9a6f559ac14d2bfb1f5"} Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283"} Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815"} Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069740 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268"} Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2"} Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069765 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba"} Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2"} Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.069801 4858 scope.go:117] "RemoveContainer" containerID="5c2f8ac30a1a0efd45dbf21a21ca0ba66e283ac1b65cb9e2f650cc0ef3cfa6af" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.072076 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/2.log" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.073327 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/1.log" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.073378 4858 generic.go:334] "Generic (PLEG): container finished" podID="19dac4e8-493c-456c-b8ea-cc1e48b9867c" containerID="bc95bceb703d4245508b3fa427ca29bcfe32dd8543a74a22f2f8c84ce26f20ab" exitCode=2 Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.073408 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjdj6" event={"ID":"19dac4e8-493c-456c-b8ea-cc1e48b9867c","Type":"ContainerDied","Data":"bc95bceb703d4245508b3fa427ca29bcfe32dd8543a74a22f2f8c84ce26f20ab"} Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.074016 4858 scope.go:117] "RemoveContainer" containerID="bc95bceb703d4245508b3fa427ca29bcfe32dd8543a74a22f2f8c84ce26f20ab" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.074290 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fjdj6_openshift-multus(19dac4e8-493c-456c-b8ea-cc1e48b9867c)\"" pod="openshift-multus/multus-fjdj6" podUID="19dac4e8-493c-456c-b8ea-cc1e48b9867c" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.246033 4858 scope.go:117] "RemoveContainer" containerID="1e665618f1d71e3b781fd65603de1517068eec1efecd3d9e175f4f4bc37262f6" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.317084 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovn-acl-logging/0.log" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.317735 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovn-controller/0.log" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.318380 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399097 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-var-lib-openvswitch\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399140 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-netd\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399172 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-ovn-kubernetes\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399194 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-etc-openvswitch\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399223 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-netns\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399270 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399321 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399359 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wl6f\" (UniqueName: \"kubernetes.io/projected/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-kube-api-access-9wl6f\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399388 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-slash\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399408 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-node-log\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399423 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-kubelet\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399450 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-openvswitch\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399465 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-ovn\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 
14:06:53.399481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-node-log" (OuterVolumeSpecName: "node-log") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399495 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399489 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-config\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399523 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-slash" (OuterVolumeSpecName: "host-slash") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399549 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399575 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399579 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-env-overrides\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-log-socket\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399640 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-systemd-units\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-script-lib\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399692 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-systemd\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399718 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399740 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-bin\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399769 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovn-node-metrics-cert\") pod \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\" (UID: \"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d\") " Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399875 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399920 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399957 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400105 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400121 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.399325 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400167 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-log-socket" (OuterVolumeSpecName: "log-socket") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400198 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400217 4858 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400229 4858 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400238 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400246 4858 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400255 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400263 4858 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-slash\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400271 4858 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-node-log\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400279 4858 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400288 4858 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400297 4858 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400306 4858 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400316 4858 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-env-overrides\") on node \"crc\" 
DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.400200 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411017 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411470 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-74hrs"] Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411707 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411727 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411740 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovn-acl-logging" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411752 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovn-acl-logging" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411761 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kubecfg-setup" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411769 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kubecfg-setup" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411783 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="nbdb" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411791 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="nbdb" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411802 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovn-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411809 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovn-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411817 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="sbdb" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411841 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="sbdb" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411852 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411860 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411871 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="northd" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411880 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="northd" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411891 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411900 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411910 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kube-rbac-proxy-node" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411918 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kube-rbac-proxy-node" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.411931 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.411939 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412083 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="northd" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412105 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412114 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412123 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412132 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412142 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="nbdb" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412155 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="kube-rbac-proxy-node" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412165 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovn-acl-logging" Dec 05 14:06:53 crc 
kubenswrapper[4858]: I1205 14:06:53.412192 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovn-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412200 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="sbdb" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.412319 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412329 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: E1205 14:06:53.412339 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412346 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412466 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.412484 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" containerName="ovnkube-controller" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.414439 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.435622 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-kube-api-access-9wl6f" (OuterVolumeSpecName: "kube-api-access-9wl6f") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "kube-api-access-9wl6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.436255 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" (UID: "e675fbac-caa5-466d-92d2-e7c6f0dd0d5d"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-cni-bin\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501433 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-systemd\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501449 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-var-lib-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501463 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovnkube-config\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-kubelet\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501531 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovn-node-metrics-cert\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501549 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-log-socket\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501564 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-node-log\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-ovn\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501641 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-run-netns\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501658 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovnkube-script-lib\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501676 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-cni-netd\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-systemd-units\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501714 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-etc-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501729 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgtx4\" (UniqueName: \"kubernetes.io/projected/38548cd8-60f6-4535-adb0-c8def63e3b8c-kube-api-access-xgtx4\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-env-overrides\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-slash\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501795 4858 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501807 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wl6f\" (UniqueName: \"kubernetes.io/projected/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-kube-api-access-9wl6f\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501851 4858 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-log-socket\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501865 4858 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501877 4858 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501981 4858 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.501991 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-slash\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-cni-bin\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-slash\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-systemd\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603441 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-run-ovn-kubernetes\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-var-lib-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovnkube-config\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-systemd\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603503 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603448 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-cni-bin\") pod 
\"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603546 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovn-node-metrics-cert\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603555 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-var-lib-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603598 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603612 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-kubelet\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-kubelet\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603667 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-log-socket\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603738 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-node-log\") pod \"ovnkube-node-74hrs\" (UID: 
\"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603758 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-log-socket\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-ovn\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603786 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-node-log\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603852 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-run-netns\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovnkube-script-lib\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603905 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-run-netns\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603941 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-cni-netd\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603945 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-run-ovn\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-host-cni-netd\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.603992 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-systemd-units\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.604080 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-etc-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.604107 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgtx4\" (UniqueName: \"kubernetes.io/projected/38548cd8-60f6-4535-adb0-c8def63e3b8c-kube-api-access-xgtx4\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.604129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-env-overrides\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.604458 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovnkube-config\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.604509 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-etc-openvswitch\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.604546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38548cd8-60f6-4535-adb0-c8def63e3b8c-systemd-units\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.604596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovnkube-script-lib\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.604732 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38548cd8-60f6-4535-adb0-c8def63e3b8c-env-overrides\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.606574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/38548cd8-60f6-4535-adb0-c8def63e3b8c-ovn-node-metrics-cert\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.620122 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgtx4\" (UniqueName: \"kubernetes.io/projected/38548cd8-60f6-4535-adb0-c8def63e3b8c-kube-api-access-xgtx4\") pod \"ovnkube-node-74hrs\" (UID: \"38548cd8-60f6-4535-adb0-c8def63e3b8c\") " pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:53 crc kubenswrapper[4858]: I1205 14:06:53.747575 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.081064 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/2.log" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.085343 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovn-acl-logging/0.log" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.085811 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtntj_e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/ovn-controller/0.log" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.086198 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" event={"ID":"e675fbac-caa5-466d-92d2-e7c6f0dd0d5d","Type":"ContainerDied","Data":"1f4a3222d09201c6993589c29f235f50b4fb2e65ce3bcb82040308b4d801ddd8"} Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.086243 4858 scope.go:117] "RemoveContainer" containerID="611593e9406f66fd9b7a45a42975c96597f67d79f43cb9a6f559ac14d2bfb1f5" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.086471 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jtntj" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.089322 4858 generic.go:334] "Generic (PLEG): container finished" podID="38548cd8-60f6-4535-adb0-c8def63e3b8c" containerID="06aa5f8beaf8e4b40691ff81e30ab060f4fe7546d2629e3408d9d4b4fe171c1b" exitCode=0 Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.089436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerDied","Data":"06aa5f8beaf8e4b40691ff81e30ab060f4fe7546d2629e3408d9d4b4fe171c1b"} Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.089533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"39987d99fc4fc36c591d8fed762b01f7675b5c550c96af218b814473b52ef696"} Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.117623 4858 scope.go:117] "RemoveContainer" containerID="ea36dc32521bc1041188a0368c2362552922b923dce6f20a090529140ede5283" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.124950 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jtntj"] Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.129504 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jtntj"] Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.138109 4858 scope.go:117] "RemoveContainer" containerID="08fac8f8bea7254fb9bf3f2de06d79eaed7c1a4b7753c2a241d0dd916db6a815" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.160989 4858 scope.go:117] "RemoveContainer" containerID="38556633fa678d7ccdd506196df565a7d430b21c3c553c30016d609e827ea268" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.192114 4858 scope.go:117] "RemoveContainer" containerID="8cce9ffae71d3f31da08d55f09cf8479db463f0aed73a7a72c79ef072d142bf2" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.211626 4858 scope.go:117] "RemoveContainer" containerID="56e72e5e45aaf68056d7d1731732dfeb83d49de24ff0871ca541b1d5ed4845ba" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.231685 4858 scope.go:117] "RemoveContainer" containerID="31382aa4b76e6d91f75dfb9f9eca111a03e92f98fa28942ad585377381cbb8fe" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.243907 4858 scope.go:117] "RemoveContainer" containerID="ca3dc6fc621ebf89ea39be720f0f8e018fc15bd309f14f6198ead75402e206d2" Dec 05 14:06:54 crc kubenswrapper[4858]: I1205 14:06:54.259064 4858 scope.go:117] "RemoveContainer" containerID="03d47519ab405ec58776d40c1918d82bc78a00f3b69ed7424361edaad4d2ea9f" Dec 05 14:06:55 crc kubenswrapper[4858]: I1205 14:06:55.099310 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"801a4978f2fa19be870a546d94fed4c2b301c7b2958e09455ca4afabf31d46e5"} Dec 05 14:06:55 crc kubenswrapper[4858]: I1205 14:06:55.099589 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"17e6fd553b17a3a73fc02e9c0aeaabf37000c7082021b7e099bc7d0f8e33cea0"} Dec 05 14:06:55 crc kubenswrapper[4858]: I1205 14:06:55.099601 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" 
event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"1d66057435524510708c8bb7bfda57834705a4288e189f7ea43728d6c7407d54"} Dec 05 14:06:55 crc kubenswrapper[4858]: I1205 14:06:55.099611 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"9bbf900ffd59745684ec6007586b77f2a459001aac4b464ca90d41fc63988b04"} Dec 05 14:06:55 crc kubenswrapper[4858]: I1205 14:06:55.099620 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"89be59f33535e10f912ed892364927d7ade0e42a1a3a01d09a6486139f14f9e9"} Dec 05 14:06:55 crc kubenswrapper[4858]: I1205 14:06:55.905764 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e675fbac-caa5-466d-92d2-e7c6f0dd0d5d" path="/var/lib/kubelet/pods/e675fbac-caa5-466d-92d2-e7c6f0dd0d5d/volumes" Dec 05 14:06:56 crc kubenswrapper[4858]: I1205 14:06:56.106552 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"cf4a9c9b65ba5107530d2f3aefe60baddd9dbd3fcf0ac9908f7b060675551dd1"} Dec 05 14:06:58 crc kubenswrapper[4858]: I1205 14:06:58.119807 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"d374c07d272ff4d29a3eb243165962475d0457eeae7d6d73225f50b7cb01a26a"} Dec 05 14:07:00 crc kubenswrapper[4858]: I1205 14:07:00.135443 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" event={"ID":"38548cd8-60f6-4535-adb0-c8def63e3b8c","Type":"ContainerStarted","Data":"9d734257fe4c337d6f2ee7519f2f37b394b49dbf43d4a27e775d1955972aaa4b"} Dec 05 14:07:00 crc kubenswrapper[4858]: I1205 14:07:00.136080 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:07:00 crc kubenswrapper[4858]: I1205 14:07:00.136102 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:07:00 crc kubenswrapper[4858]: I1205 14:07:00.166296 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" podStartSLOduration=7.166278785 podStartE2EDuration="7.166278785s" podCreationTimestamp="2025-12-05 14:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:07:00.162040679 +0000 UTC m=+628.709638838" watchObservedRunningTime="2025-12-05 14:07:00.166278785 +0000 UTC m=+628.713876924" Dec 05 14:07:00 crc kubenswrapper[4858]: I1205 14:07:00.170038 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:07:01 crc kubenswrapper[4858]: I1205 14:07:01.140552 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:07:01 crc kubenswrapper[4858]: I1205 14:07:01.166511 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:07:06 crc kubenswrapper[4858]: I1205 
14:07:06.899100 4858 scope.go:117] "RemoveContainer" containerID="bc95bceb703d4245508b3fa427ca29bcfe32dd8543a74a22f2f8c84ce26f20ab" Dec 05 14:07:06 crc kubenswrapper[4858]: E1205 14:07:06.899544 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fjdj6_openshift-multus(19dac4e8-493c-456c-b8ea-cc1e48b9867c)\"" pod="openshift-multus/multus-fjdj6" podUID="19dac4e8-493c-456c-b8ea-cc1e48b9867c" Dec 05 14:07:14 crc kubenswrapper[4858]: I1205 14:07:14.760131 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:07:14 crc kubenswrapper[4858]: I1205 14:07:14.760632 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:07:14 crc kubenswrapper[4858]: I1205 14:07:14.760679 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:07:14 crc kubenswrapper[4858]: I1205 14:07:14.761264 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b223ebad30a2f7caa7c0f9f256f2d9437e338680d956fb743d7b1bcdf70d4a7c"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:07:14 crc kubenswrapper[4858]: I1205 14:07:14.761328 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://b223ebad30a2f7caa7c0f9f256f2d9437e338680d956fb743d7b1bcdf70d4a7c" gracePeriod=600 Dec 05 14:07:15 crc kubenswrapper[4858]: I1205 14:07:15.214479 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="b223ebad30a2f7caa7c0f9f256f2d9437e338680d956fb743d7b1bcdf70d4a7c" exitCode=0 Dec 05 14:07:15 crc kubenswrapper[4858]: I1205 14:07:15.214527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"b223ebad30a2f7caa7c0f9f256f2d9437e338680d956fb743d7b1bcdf70d4a7c"} Dec 05 14:07:15 crc kubenswrapper[4858]: I1205 14:07:15.214565 4858 scope.go:117] "RemoveContainer" containerID="3ab1fc1ade15987d254249f652eeb63b38a39486edb0297f61ed8eaf801d6fa5" Dec 05 14:07:16 crc kubenswrapper[4858]: I1205 14:07:16.224307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"aeb26ce2f72c5b27c0b5939e948f7b4c1c734a8dc5b04d0306f5422f039d5f18"} Dec 05 14:07:17 crc kubenswrapper[4858]: I1205 14:07:17.899777 4858 scope.go:117] "RemoveContainer" 
containerID="bc95bceb703d4245508b3fa427ca29bcfe32dd8543a74a22f2f8c84ce26f20ab" Dec 05 14:07:19 crc kubenswrapper[4858]: I1205 14:07:19.252477 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjdj6_19dac4e8-493c-456c-b8ea-cc1e48b9867c/kube-multus/2.log" Dec 05 14:07:19 crc kubenswrapper[4858]: I1205 14:07:19.252772 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjdj6" event={"ID":"19dac4e8-493c-456c-b8ea-cc1e48b9867c","Type":"ContainerStarted","Data":"11185f626346e6c2b839c84d81bae1bb3c6a80b5de95efa185e424d10ea1584d"} Dec 05 14:07:23 crc kubenswrapper[4858]: I1205 14:07:23.770592 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-74hrs" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.730552 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns"] Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.732237 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.734084 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.741567 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns"] Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.845592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.845653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgb9q\" (UniqueName: \"kubernetes.io/projected/4ea634ac-28d6-4706-a690-59273a2edaca-kube-api-access-jgb9q\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.845677 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.946334 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgb9q\" (UniqueName: \"kubernetes.io/projected/4ea634ac-28d6-4706-a690-59273a2edaca-kube-api-access-jgb9q\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 
14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.946384 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.946444 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.946944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.947025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:35 crc kubenswrapper[4858]: I1205 14:07:35.964748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgb9q\" (UniqueName: \"kubernetes.io/projected/4ea634ac-28d6-4706-a690-59273a2edaca-kube-api-access-jgb9q\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:36 crc kubenswrapper[4858]: I1205 14:07:36.049479 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:36 crc kubenswrapper[4858]: I1205 14:07:36.228244 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns"] Dec 05 14:07:36 crc kubenswrapper[4858]: W1205 14:07:36.233963 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ea634ac_28d6_4706_a690_59273a2edaca.slice/crio-f7f619131987087b5a72ddc41941fe71494a0f6d039bdc99a3a81e74dbf0d2db WatchSource:0}: Error finding container f7f619131987087b5a72ddc41941fe71494a0f6d039bdc99a3a81e74dbf0d2db: Status 404 returned error can't find the container with id f7f619131987087b5a72ddc41941fe71494a0f6d039bdc99a3a81e74dbf0d2db Dec 05 14:07:36 crc kubenswrapper[4858]: I1205 14:07:36.334739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" event={"ID":"4ea634ac-28d6-4706-a690-59273a2edaca","Type":"ContainerStarted","Data":"f7f619131987087b5a72ddc41941fe71494a0f6d039bdc99a3a81e74dbf0d2db"} Dec 05 14:07:37 crc kubenswrapper[4858]: I1205 14:07:37.341332 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ea634ac-28d6-4706-a690-59273a2edaca" containerID="e2552ba2ff3485ca9f4de6ea4cfe5c0c805b8b2de3f7b207a846dcf79646a30e" exitCode=0 Dec 05 14:07:37 crc kubenswrapper[4858]: I1205 14:07:37.341427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" event={"ID":"4ea634ac-28d6-4706-a690-59273a2edaca","Type":"ContainerDied","Data":"e2552ba2ff3485ca9f4de6ea4cfe5c0c805b8b2de3f7b207a846dcf79646a30e"} Dec 05 14:07:40 crc kubenswrapper[4858]: I1205 14:07:40.357195 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ea634ac-28d6-4706-a690-59273a2edaca" containerID="60289fdb2a461e8941c8335e83bf6b9a5d48284376baf8d63e8a509f2d37711d" exitCode=0 Dec 05 14:07:40 crc kubenswrapper[4858]: I1205 14:07:40.357256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" event={"ID":"4ea634ac-28d6-4706-a690-59273a2edaca","Type":"ContainerDied","Data":"60289fdb2a461e8941c8335e83bf6b9a5d48284376baf8d63e8a509f2d37711d"} Dec 05 14:07:41 crc kubenswrapper[4858]: I1205 14:07:41.365157 4858 generic.go:334] "Generic (PLEG): container finished" podID="4ea634ac-28d6-4706-a690-59273a2edaca" containerID="6d086d29172348338eef3bce709e3c2996b962a7a34a69021e87c516c0c62f34" exitCode=0 Dec 05 14:07:41 crc kubenswrapper[4858]: I1205 14:07:41.365245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" event={"ID":"4ea634ac-28d6-4706-a690-59273a2edaca","Type":"ContainerDied","Data":"6d086d29172348338eef3bce709e3c2996b962a7a34a69021e87c516c0c62f34"} Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.625567 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.758676 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgb9q\" (UniqueName: \"kubernetes.io/projected/4ea634ac-28d6-4706-a690-59273a2edaca-kube-api-access-jgb9q\") pod \"4ea634ac-28d6-4706-a690-59273a2edaca\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.758876 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-bundle\") pod \"4ea634ac-28d6-4706-a690-59273a2edaca\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.758938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-util\") pod \"4ea634ac-28d6-4706-a690-59273a2edaca\" (UID: \"4ea634ac-28d6-4706-a690-59273a2edaca\") " Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.760058 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-bundle" (OuterVolumeSpecName: "bundle") pod "4ea634ac-28d6-4706-a690-59273a2edaca" (UID: "4ea634ac-28d6-4706-a690-59273a2edaca"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.763698 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea634ac-28d6-4706-a690-59273a2edaca-kube-api-access-jgb9q" (OuterVolumeSpecName: "kube-api-access-jgb9q") pod "4ea634ac-28d6-4706-a690-59273a2edaca" (UID: "4ea634ac-28d6-4706-a690-59273a2edaca"). InnerVolumeSpecName "kube-api-access-jgb9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.769214 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-util" (OuterVolumeSpecName: "util") pod "4ea634ac-28d6-4706-a690-59273a2edaca" (UID: "4ea634ac-28d6-4706-a690-59273a2edaca"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.860170 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-util\") on node \"crc\" DevicePath \"\"" Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.860211 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgb9q\" (UniqueName: \"kubernetes.io/projected/4ea634ac-28d6-4706-a690-59273a2edaca-kube-api-access-jgb9q\") on node \"crc\" DevicePath \"\"" Dec 05 14:07:42 crc kubenswrapper[4858]: I1205 14:07:42.860225 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4ea634ac-28d6-4706-a690-59273a2edaca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:07:43 crc kubenswrapper[4858]: I1205 14:07:43.375538 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" event={"ID":"4ea634ac-28d6-4706-a690-59273a2edaca","Type":"ContainerDied","Data":"f7f619131987087b5a72ddc41941fe71494a0f6d039bdc99a3a81e74dbf0d2db"} Dec 05 14:07:43 crc kubenswrapper[4858]: I1205 14:07:43.375577 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7f619131987087b5a72ddc41941fe71494a0f6d039bdc99a3a81e74dbf0d2db" Dec 05 14:07:43 crc kubenswrapper[4858]: I1205 14:07:43.375634 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ff5jns" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.622548 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv"] Dec 05 14:07:44 crc kubenswrapper[4858]: E1205 14:07:44.623324 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea634ac-28d6-4706-a690-59273a2edaca" containerName="pull" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.623341 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea634ac-28d6-4706-a690-59273a2edaca" containerName="pull" Dec 05 14:07:44 crc kubenswrapper[4858]: E1205 14:07:44.623365 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea634ac-28d6-4706-a690-59273a2edaca" containerName="extract" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.623371 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea634ac-28d6-4706-a690-59273a2edaca" containerName="extract" Dec 05 14:07:44 crc kubenswrapper[4858]: E1205 14:07:44.623384 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea634ac-28d6-4706-a690-59273a2edaca" containerName="util" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.623391 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea634ac-28d6-4706-a690-59273a2edaca" containerName="util" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.623503 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea634ac-28d6-4706-a690-59273a2edaca" containerName="extract" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.624012 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.627155 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.627404 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-7wph8" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.627530 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.653357 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv"] Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.685456 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nldvc\" (UniqueName: \"kubernetes.io/projected/8ed4b460-7987-440e-803e-12e2916b71ae-kube-api-access-nldvc\") pod \"nmstate-operator-5b5b58f5c8-x4ffv\" (UID: \"8ed4b460-7987-440e-803e-12e2916b71ae\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.786999 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nldvc\" (UniqueName: \"kubernetes.io/projected/8ed4b460-7987-440e-803e-12e2916b71ae-kube-api-access-nldvc\") pod \"nmstate-operator-5b5b58f5c8-x4ffv\" (UID: \"8ed4b460-7987-440e-803e-12e2916b71ae\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.807682 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nldvc\" (UniqueName: \"kubernetes.io/projected/8ed4b460-7987-440e-803e-12e2916b71ae-kube-api-access-nldvc\") pod \"nmstate-operator-5b5b58f5c8-x4ffv\" (UID: \"8ed4b460-7987-440e-803e-12e2916b71ae\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv" Dec 05 14:07:44 crc kubenswrapper[4858]: I1205 14:07:44.940699 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv" Dec 05 14:07:45 crc kubenswrapper[4858]: I1205 14:07:45.187414 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv"] Dec 05 14:07:45 crc kubenswrapper[4858]: W1205 14:07:45.210521 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ed4b460_7987_440e_803e_12e2916b71ae.slice/crio-7415450ce22e311bed294e4c1876b19aceefc2ed65c6a43e6d85fc5880a0fe1b WatchSource:0}: Error finding container 7415450ce22e311bed294e4c1876b19aceefc2ed65c6a43e6d85fc5880a0fe1b: Status 404 returned error can't find the container with id 7415450ce22e311bed294e4c1876b19aceefc2ed65c6a43e6d85fc5880a0fe1b Dec 05 14:07:45 crc kubenswrapper[4858]: I1205 14:07:45.386978 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv" event={"ID":"8ed4b460-7987-440e-803e-12e2916b71ae","Type":"ContainerStarted","Data":"7415450ce22e311bed294e4c1876b19aceefc2ed65c6a43e6d85fc5880a0fe1b"} Dec 05 14:07:48 crc kubenswrapper[4858]: I1205 14:07:48.402559 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv" event={"ID":"8ed4b460-7987-440e-803e-12e2916b71ae","Type":"ContainerStarted","Data":"80496c32e43471e73604faa400471ca63fad1989719f7e8134d059a2f934ac34"} Dec 05 14:07:48 crc kubenswrapper[4858]: I1205 14:07:48.421879 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-x4ffv" podStartSLOduration=1.6073399940000002 podStartE2EDuration="4.421857379s" podCreationTimestamp="2025-12-05 14:07:44 +0000 UTC" firstStartedPulling="2025-12-05 14:07:45.216103254 +0000 UTC m=+673.763701393" lastFinishedPulling="2025-12-05 14:07:48.030620639 +0000 UTC m=+676.578218778" observedRunningTime="2025-12-05 14:07:48.416858207 +0000 UTC m=+676.964456346" watchObservedRunningTime="2025-12-05 14:07:48.421857379 +0000 UTC m=+676.969455528" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.398895 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.400220 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.406524 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.410459 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hjkvn" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.421433 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.422133 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.425984 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.450604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jplm8\" (UniqueName: \"kubernetes.io/projected/4b3d39ce-7f49-470b-af52-6895f872f60d-kube-api-access-jplm8\") pod \"nmstate-webhook-5f6d4c5ccb-mz5j7\" (UID: \"4b3d39ce-7f49-470b-af52-6895f872f60d\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.450683 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4b3d39ce-7f49-470b-af52-6895f872f60d-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-mz5j7\" (UID: \"4b3d39ce-7f49-470b-af52-6895f872f60d\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.450909 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6t52\" (UniqueName: \"kubernetes.io/projected/19816b09-99e3-4d46-8461-0de8d01b86b5-kube-api-access-f6t52\") pod \"nmstate-metrics-7f946cbc9-fr8q8\" (UID: \"19816b09-99e3-4d46-8461-0de8d01b86b5\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.460924 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-f2tv5"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.461668 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.462756 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.552219 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-dbus-socket\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.552291 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6t52\" (UniqueName: \"kubernetes.io/projected/19816b09-99e3-4d46-8461-0de8d01b86b5-kube-api-access-f6t52\") pod \"nmstate-metrics-7f946cbc9-fr8q8\" (UID: \"19816b09-99e3-4d46-8461-0de8d01b86b5\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.552318 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-ovs-socket\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.552334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-nmstate-lock\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.552504 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jplm8\" (UniqueName: \"kubernetes.io/projected/4b3d39ce-7f49-470b-af52-6895f872f60d-kube-api-access-jplm8\") pod \"nmstate-webhook-5f6d4c5ccb-mz5j7\" (UID: \"4b3d39ce-7f49-470b-af52-6895f872f60d\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.552569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4b3d39ce-7f49-470b-af52-6895f872f60d-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-mz5j7\" (UID: \"4b3d39ce-7f49-470b-af52-6895f872f60d\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.552630 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcg68\" (UniqueName: \"kubernetes.io/projected/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-kube-api-access-dcg68\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: E1205 14:07:49.552757 4858 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 05 14:07:49 crc kubenswrapper[4858]: E1205 14:07:49.552815 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b3d39ce-7f49-470b-af52-6895f872f60d-tls-key-pair podName:4b3d39ce-7f49-470b-af52-6895f872f60d nodeName:}" failed. 
No retries permitted until 2025-12-05 14:07:50.052798111 +0000 UTC m=+678.600396250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4b3d39ce-7f49-470b-af52-6895f872f60d-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-mz5j7" (UID: "4b3d39ce-7f49-470b-af52-6895f872f60d") : secret "openshift-nmstate-webhook" not found Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.573021 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jplm8\" (UniqueName: \"kubernetes.io/projected/4b3d39ce-7f49-470b-af52-6895f872f60d-kube-api-access-jplm8\") pod \"nmstate-webhook-5f6d4c5ccb-mz5j7\" (UID: \"4b3d39ce-7f49-470b-af52-6895f872f60d\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.577544 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6t52\" (UniqueName: \"kubernetes.io/projected/19816b09-99e3-4d46-8461-0de8d01b86b5-kube-api-access-f6t52\") pod \"nmstate-metrics-7f946cbc9-fr8q8\" (UID: \"19816b09-99e3-4d46-8461-0de8d01b86b5\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.597692 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.598535 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.605911 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.606042 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rwm9v" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.613543 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.622506 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-dbus-socket\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654204 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvp5h\" (UniqueName: \"kubernetes.io/projected/4fac936d-c29f-486a-a6d0-756aa6ada599-kube-api-access-qvp5h\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654226 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-ovs-socket\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 
14:07:49.654241 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-nmstate-lock\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654260 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4fac936d-c29f-486a-a6d0-756aa6ada599-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654278 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac936d-c29f-486a-a6d0-756aa6ada599-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654324 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcg68\" (UniqueName: \"kubernetes.io/projected/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-kube-api-access-dcg68\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-ovs-socket\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654687 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-nmstate-lock\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.654774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-dbus-socket\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.673400 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcg68\" (UniqueName: \"kubernetes.io/projected/1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1-kube-api-access-dcg68\") pod \"nmstate-handler-f2tv5\" (UID: \"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1\") " pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.723141 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.755234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp5h\" (UniqueName: \"kubernetes.io/projected/4fac936d-c29f-486a-a6d0-756aa6ada599-kube-api-access-qvp5h\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.755279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4fac936d-c29f-486a-a6d0-756aa6ada599-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.755298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac936d-c29f-486a-a6d0-756aa6ada599-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: E1205 14:07:49.755410 4858 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 05 14:07:49 crc kubenswrapper[4858]: E1205 14:07:49.755457 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fac936d-c29f-486a-a6d0-756aa6ada599-plugin-serving-cert podName:4fac936d-c29f-486a-a6d0-756aa6ada599 nodeName:}" failed. No retries permitted until 2025-12-05 14:07:50.255441524 +0000 UTC m=+678.803039663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/4fac936d-c29f-486a-a6d0-756aa6ada599-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-2r8xh" (UID: "4fac936d-c29f-486a-a6d0-756aa6ada599") : secret "plugin-serving-cert" not found Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.756489 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4fac936d-c29f-486a-a6d0-756aa6ada599-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.775473 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp5h\" (UniqueName: \"kubernetes.io/projected/4fac936d-c29f-486a-a6d0-756aa6ada599-kube-api-access-qvp5h\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.777138 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.797440 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85b6884698-jg67f"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.798165 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.858621 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-console-config\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.858935 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/edd4d801-d89a-48f7-a598-9011f83ceefd-console-serving-cert\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.858962 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-trusted-ca-bundle\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.858982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/edd4d801-d89a-48f7-a598-9011f83ceefd-console-oauth-config\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.859024 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmxgh\" (UniqueName: \"kubernetes.io/projected/edd4d801-d89a-48f7-a598-9011f83ceefd-kube-api-access-qmxgh\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.859085 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-oauth-serving-cert\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.859114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-service-ca\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.860159 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85b6884698-jg67f"] Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.959858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmxgh\" (UniqueName: \"kubernetes.io/projected/edd4d801-d89a-48f7-a598-9011f83ceefd-kube-api-access-qmxgh\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 
crc kubenswrapper[4858]: I1205 14:07:49.959956 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-oauth-serving-cert\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.959989 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-service-ca\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.960056 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-console-config\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.960095 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/edd4d801-d89a-48f7-a598-9011f83ceefd-console-serving-cert\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.960125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-trusted-ca-bundle\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.960151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/edd4d801-d89a-48f7-a598-9011f83ceefd-console-oauth-config\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.962988 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-oauth-serving-cert\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.963635 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-service-ca\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.967397 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/edd4d801-d89a-48f7-a598-9011f83ceefd-console-oauth-config\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 
14:07:49.969010 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-trusted-ca-bundle\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.969608 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/edd4d801-d89a-48f7-a598-9011f83ceefd-console-config\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.970570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/edd4d801-d89a-48f7-a598-9011f83ceefd-console-serving-cert\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:49 crc kubenswrapper[4858]: I1205 14:07:49.984323 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmxgh\" (UniqueName: \"kubernetes.io/projected/edd4d801-d89a-48f7-a598-9011f83ceefd-kube-api-access-qmxgh\") pod \"console-85b6884698-jg67f\" (UID: \"edd4d801-d89a-48f7-a598-9011f83ceefd\") " pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.061498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4b3d39ce-7f49-470b-af52-6895f872f60d-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-mz5j7\" (UID: \"4b3d39ce-7f49-470b-af52-6895f872f60d\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.065447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4b3d39ce-7f49-470b-af52-6895f872f60d-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-mz5j7\" (UID: \"4b3d39ce-7f49-470b-af52-6895f872f60d\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.109114 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8"] Dec 05 14:07:50 crc kubenswrapper[4858]: W1205 14:07:50.114945 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19816b09_99e3_4d46_8461_0de8d01b86b5.slice/crio-6323e317ed733ec2ba36401268a4cd4dd8a67691aab2c5d1f1e59e88bb431670 WatchSource:0}: Error finding container 6323e317ed733ec2ba36401268a4cd4dd8a67691aab2c5d1f1e59e88bb431670: Status 404 returned error can't find the container with id 6323e317ed733ec2ba36401268a4cd4dd8a67691aab2c5d1f1e59e88bb431670 Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.116329 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.264927 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac936d-c29f-486a-a6d0-756aa6ada599-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.269903 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac936d-c29f-486a-a6d0-756aa6ada599-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-2r8xh\" (UID: \"4fac936d-c29f-486a-a6d0-756aa6ada599\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.317482 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85b6884698-jg67f"] Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.341533 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.418184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-f2tv5" event={"ID":"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1","Type":"ContainerStarted","Data":"79bd5541590ff8bca2b3154f45d6acd1e76a585ef4377de8fdde8f7d7756eb82"} Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.420764 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85b6884698-jg67f" event={"ID":"edd4d801-d89a-48f7-a598-9011f83ceefd","Type":"ContainerStarted","Data":"f50b1a1dfcdcb61b8d8c82870295e3a674d0ffc95f43b9bd329767b566c0b5f1"} Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.421560 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" event={"ID":"19816b09-99e3-4d46-8461-0de8d01b86b5","Type":"ContainerStarted","Data":"6323e317ed733ec2ba36401268a4cd4dd8a67691aab2c5d1f1e59e88bb431670"} Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.534986 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.551605 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7"] Dec 05 14:07:50 crc kubenswrapper[4858]: I1205 14:07:50.755333 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh"] Dec 05 14:07:51 crc kubenswrapper[4858]: I1205 14:07:51.431251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" event={"ID":"4b3d39ce-7f49-470b-af52-6895f872f60d","Type":"ContainerStarted","Data":"63a4f95698b47cb91624a1caf9630d7ecafc1329204930c2bf2e6aff696a0bab"} Dec 05 14:07:51 crc kubenswrapper[4858]: I1205 14:07:51.432401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" event={"ID":"4fac936d-c29f-486a-a6d0-756aa6ada599","Type":"ContainerStarted","Data":"017e64175f818fd721a22e37dd20f26f8485dd2c9110e38743039cf3e782c3be"} Dec 05 14:07:51 crc kubenswrapper[4858]: I1205 14:07:51.433747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85b6884698-jg67f" event={"ID":"edd4d801-d89a-48f7-a598-9011f83ceefd","Type":"ContainerStarted","Data":"1a6a6fb1c75c83493931451327dbf14a0aba4fde9b537e86c41513161fe7de56"} Dec 05 14:07:51 crc kubenswrapper[4858]: I1205 14:07:51.450046 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85b6884698-jg67f" podStartSLOduration=2.450028572 podStartE2EDuration="2.450028572s" podCreationTimestamp="2025-12-05 14:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:07:51.446536449 +0000 UTC m=+679.994134598" watchObservedRunningTime="2025-12-05 14:07:51.450028572 +0000 UTC m=+679.997626711" Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.487887 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" event={"ID":"4b3d39ce-7f49-470b-af52-6895f872f60d","Type":"ContainerStarted","Data":"716ad0540499e4f45fcfcfebf00e4f32078671a7ac0bc8cd44dfefc8e2786ab4"} Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.488382 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.490907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-f2tv5" event={"ID":"1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1","Type":"ContainerStarted","Data":"9a814eabc7ac6ab84048c15bb0f813e916701012911f817c330d87fd0e8251a9"} Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.491063 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.492237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" event={"ID":"4fac936d-c29f-486a-a6d0-756aa6ada599","Type":"ContainerStarted","Data":"7474e60e1b70f6c90318e1b9da95ebee300d95567b7e7cdfbf4c5982e6a8a9e3"} Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.493393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" 
event={"ID":"19816b09-99e3-4d46-8461-0de8d01b86b5","Type":"ContainerStarted","Data":"e3d7db371359331fb62fcd88ddcb19bc27ccdac24c49e23b3b4248df039714b9"} Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.503372 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" podStartSLOduration=2.780778359 podStartE2EDuration="6.503356075s" podCreationTimestamp="2025-12-05 14:07:49 +0000 UTC" firstStartedPulling="2025-12-05 14:07:50.558340559 +0000 UTC m=+679.105938698" lastFinishedPulling="2025-12-05 14:07:54.280918275 +0000 UTC m=+682.828516414" observedRunningTime="2025-12-05 14:07:55.501784873 +0000 UTC m=+684.049383012" watchObservedRunningTime="2025-12-05 14:07:55.503356075 +0000 UTC m=+684.050954214" Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.535698 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-f2tv5" podStartSLOduration=2.101280825 podStartE2EDuration="6.535678577s" podCreationTimestamp="2025-12-05 14:07:49 +0000 UTC" firstStartedPulling="2025-12-05 14:07:49.837720448 +0000 UTC m=+678.385318587" lastFinishedPulling="2025-12-05 14:07:54.2721182 +0000 UTC m=+682.819716339" observedRunningTime="2025-12-05 14:07:55.533361546 +0000 UTC m=+684.080959685" watchObservedRunningTime="2025-12-05 14:07:55.535678577 +0000 UTC m=+684.083276716" Dec 05 14:07:55 crc kubenswrapper[4858]: I1205 14:07:55.557472 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-2r8xh" podStartSLOduration=3.050123241 podStartE2EDuration="6.557457578s" podCreationTimestamp="2025-12-05 14:07:49 +0000 UTC" firstStartedPulling="2025-12-05 14:07:50.764779333 +0000 UTC m=+679.312377472" lastFinishedPulling="2025-12-05 14:07:54.27211367 +0000 UTC m=+682.819711809" observedRunningTime="2025-12-05 14:07:55.554840438 +0000 UTC m=+684.102438577" watchObservedRunningTime="2025-12-05 14:07:55.557457578 +0000 UTC m=+684.105055717" Dec 05 14:07:57 crc kubenswrapper[4858]: I1205 14:07:57.508869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" event={"ID":"19816b09-99e3-4d46-8461-0de8d01b86b5","Type":"ContainerStarted","Data":"14c9265fb6ef3a78cf23f46601f8e8f795ab65b3bcab6dd7e3bd02737b57364d"} Dec 05 14:07:57 crc kubenswrapper[4858]: I1205 14:07:57.523654 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-fr8q8" podStartSLOduration=2.151247227 podStartE2EDuration="8.523635707s" podCreationTimestamp="2025-12-05 14:07:49 +0000 UTC" firstStartedPulling="2025-12-05 14:07:50.118075152 +0000 UTC m=+678.665673291" lastFinishedPulling="2025-12-05 14:07:56.490463632 +0000 UTC m=+685.038061771" observedRunningTime="2025-12-05 14:07:57.52187874 +0000 UTC m=+686.069476899" watchObservedRunningTime="2025-12-05 14:07:57.523635707 +0000 UTC m=+686.071233856" Dec 05 14:07:59 crc kubenswrapper[4858]: I1205 14:07:59.808927 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-f2tv5" Dec 05 14:08:00 crc kubenswrapper[4858]: I1205 14:08:00.116613 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:08:00 crc kubenswrapper[4858]: I1205 14:08:00.116950 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:08:00 crc kubenswrapper[4858]: I1205 14:08:00.121558 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:08:00 crc kubenswrapper[4858]: I1205 14:08:00.528370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85b6884698-jg67f" Dec 05 14:08:00 crc kubenswrapper[4858]: I1205 14:08:00.575489 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x25gp"] Dec 05 14:08:10 crc kubenswrapper[4858]: I1205 14:08:10.348894 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.381776 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9"] Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.383298 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.384975 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.395024 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9"] Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.535796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.535912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dptt\" (UniqueName: \"kubernetes.io/projected/b4f84a04-efe1-4685-9eef-c4518905ccaf-kube-api-access-6dptt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.535958 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.637341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dptt\" (UniqueName: \"kubernetes.io/projected/b4f84a04-efe1-4685-9eef-c4518905ccaf-kube-api-access-6dptt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 
14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.637734 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.637918 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.638453 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.638473 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.657363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dptt\" (UniqueName: \"kubernetes.io/projected/b4f84a04-efe1-4685-9eef-c4518905ccaf-kube-api-access-6dptt\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.698528 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:23 crc kubenswrapper[4858]: I1205 14:08:23.897608 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9"] Dec 05 14:08:23 crc kubenswrapper[4858]: W1205 14:08:23.909069 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4f84a04_efe1_4685_9eef_c4518905ccaf.slice/crio-e81823a4393bc30fd974ee6e78c238ceb7e79eb0c4feb06bf848e856c5277ccb WatchSource:0}: Error finding container e81823a4393bc30fd974ee6e78c238ceb7e79eb0c4feb06bf848e856c5277ccb: Status 404 returned error can't find the container with id e81823a4393bc30fd974ee6e78c238ceb7e79eb0c4feb06bf848e856c5277ccb Dec 05 14:08:24 crc kubenswrapper[4858]: I1205 14:08:24.656764 4858 generic.go:334] "Generic (PLEG): container finished" podID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerID="0955e27586608b88134ae3324cf6b36409863fc04859c07126fafdb848cd313a" exitCode=0 Dec 05 14:08:24 crc kubenswrapper[4858]: I1205 14:08:24.656800 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" event={"ID":"b4f84a04-efe1-4685-9eef-c4518905ccaf","Type":"ContainerDied","Data":"0955e27586608b88134ae3324cf6b36409863fc04859c07126fafdb848cd313a"} Dec 05 14:08:24 crc kubenswrapper[4858]: I1205 14:08:24.656853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" event={"ID":"b4f84a04-efe1-4685-9eef-c4518905ccaf","Type":"ContainerStarted","Data":"e81823a4393bc30fd974ee6e78c238ceb7e79eb0c4feb06bf848e856c5277ccb"} Dec 05 14:08:25 crc kubenswrapper[4858]: I1205 14:08:25.642960 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x25gp" podUID="1329b103-5d7b-492b-96ed-c7b5b10e8edd" containerName="console" containerID="cri-o://df560568844a9e0e9ced309a9d458ca9b9c1c357374e4e5c02d83679a7ccd1ce" gracePeriod=15 Dec 05 14:08:26 crc kubenswrapper[4858]: I1205 14:08:26.673951 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x25gp_1329b103-5d7b-492b-96ed-c7b5b10e8edd/console/0.log" Dec 05 14:08:26 crc kubenswrapper[4858]: I1205 14:08:26.673998 4858 generic.go:334] "Generic (PLEG): container finished" podID="1329b103-5d7b-492b-96ed-c7b5b10e8edd" containerID="df560568844a9e0e9ced309a9d458ca9b9c1c357374e4e5c02d83679a7ccd1ce" exitCode=2 Dec 05 14:08:26 crc kubenswrapper[4858]: I1205 14:08:26.674025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x25gp" event={"ID":"1329b103-5d7b-492b-96ed-c7b5b10e8edd","Type":"ContainerDied","Data":"df560568844a9e0e9ced309a9d458ca9b9c1c357374e4e5c02d83679a7ccd1ce"} Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.388397 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x25gp_1329b103-5d7b-492b-96ed-c7b5b10e8edd/console/0.log" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.388696 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x25gp" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.408255 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-oauth-config\") pod \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.408597 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-serving-cert\") pod \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.410080 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-trusted-ca-bundle\") pod \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.410310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-config\") pod \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.410632 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-oauth-serving-cert\") pod \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.410706 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpbtp\" (UniqueName: \"kubernetes.io/projected/1329b103-5d7b-492b-96ed-c7b5b10e8edd-kube-api-access-zpbtp\") pod \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.410910 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-service-ca\") pod \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\" (UID: \"1329b103-5d7b-492b-96ed-c7b5b10e8edd\") " Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.413976 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-config" (OuterVolumeSpecName: "console-config") pod "1329b103-5d7b-492b-96ed-c7b5b10e8edd" (UID: "1329b103-5d7b-492b-96ed-c7b5b10e8edd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.414105 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-service-ca" (OuterVolumeSpecName: "service-ca") pod "1329b103-5d7b-492b-96ed-c7b5b10e8edd" (UID: "1329b103-5d7b-492b-96ed-c7b5b10e8edd"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.414211 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1329b103-5d7b-492b-96ed-c7b5b10e8edd" (UID: "1329b103-5d7b-492b-96ed-c7b5b10e8edd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.414451 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1329b103-5d7b-492b-96ed-c7b5b10e8edd" (UID: "1329b103-5d7b-492b-96ed-c7b5b10e8edd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.414696 4858 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.414714 4858 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.414745 4858 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.414757 4858 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1329b103-5d7b-492b-96ed-c7b5b10e8edd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.418689 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1329b103-5d7b-492b-96ed-c7b5b10e8edd-kube-api-access-zpbtp" (OuterVolumeSpecName: "kube-api-access-zpbtp") pod "1329b103-5d7b-492b-96ed-c7b5b10e8edd" (UID: "1329b103-5d7b-492b-96ed-c7b5b10e8edd"). InnerVolumeSpecName "kube-api-access-zpbtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.430208 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1329b103-5d7b-492b-96ed-c7b5b10e8edd" (UID: "1329b103-5d7b-492b-96ed-c7b5b10e8edd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.437363 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1329b103-5d7b-492b-96ed-c7b5b10e8edd" (UID: "1329b103-5d7b-492b-96ed-c7b5b10e8edd"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.516254 4858 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.516300 4858 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1329b103-5d7b-492b-96ed-c7b5b10e8edd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.516313 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpbtp\" (UniqueName: \"kubernetes.io/projected/1329b103-5d7b-492b-96ed-c7b5b10e8edd-kube-api-access-zpbtp\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.687025 4858 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x25gp_1329b103-5d7b-492b-96ed-c7b5b10e8edd/console/0.log" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.687375 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x25gp" event={"ID":"1329b103-5d7b-492b-96ed-c7b5b10e8edd","Type":"ContainerDied","Data":"a807aa596b09d99c0278ec930a1d5ee6783b6da60ab51b1d752079aad8eaf1e0"} Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.687415 4858 scope.go:117] "RemoveContainer" containerID="df560568844a9e0e9ced309a9d458ca9b9c1c357374e4e5c02d83679a7ccd1ce" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.688167 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x25gp" Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.691819 4858 generic.go:334] "Generic (PLEG): container finished" podID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerID="957a3b2f2e8747feb9c316d3e849fe3cdeb6629d5b32dbe91331d767a8df5589" exitCode=0 Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.691894 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" event={"ID":"b4f84a04-efe1-4685-9eef-c4518905ccaf","Type":"ContainerDied","Data":"957a3b2f2e8747feb9c316d3e849fe3cdeb6629d5b32dbe91331d767a8df5589"} Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.726045 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x25gp"] Dec 05 14:08:28 crc kubenswrapper[4858]: I1205 14:08:28.730120 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x25gp"] Dec 05 14:08:29 crc kubenswrapper[4858]: I1205 14:08:29.905907 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1329b103-5d7b-492b-96ed-c7b5b10e8edd" path="/var/lib/kubelet/pods/1329b103-5d7b-492b-96ed-c7b5b10e8edd/volumes" Dec 05 14:08:32 crc kubenswrapper[4858]: I1205 14:08:32.717060 4858 generic.go:334] "Generic (PLEG): container finished" podID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerID="bd0283983838ec7d67fa33dfdf8e12a5f8e27fc3c26ed80ee4e3034128c57e06" exitCode=0 Dec 05 14:08:32 crc kubenswrapper[4858]: I1205 14:08:32.717126 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" 
event={"ID":"b4f84a04-efe1-4685-9eef-c4518905ccaf","Type":"ContainerDied","Data":"bd0283983838ec7d67fa33dfdf8e12a5f8e27fc3c26ed80ee4e3034128c57e06"} Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.009779 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.127567 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-util\") pod \"b4f84a04-efe1-4685-9eef-c4518905ccaf\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.129082 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dptt\" (UniqueName: \"kubernetes.io/projected/b4f84a04-efe1-4685-9eef-c4518905ccaf-kube-api-access-6dptt\") pod \"b4f84a04-efe1-4685-9eef-c4518905ccaf\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.129152 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-bundle\") pod \"b4f84a04-efe1-4685-9eef-c4518905ccaf\" (UID: \"b4f84a04-efe1-4685-9eef-c4518905ccaf\") " Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.130290 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-bundle" (OuterVolumeSpecName: "bundle") pod "b4f84a04-efe1-4685-9eef-c4518905ccaf" (UID: "b4f84a04-efe1-4685-9eef-c4518905ccaf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.138546 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-util" (OuterVolumeSpecName: "util") pod "b4f84a04-efe1-4685-9eef-c4518905ccaf" (UID: "b4f84a04-efe1-4685-9eef-c4518905ccaf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.138973 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f84a04-efe1-4685-9eef-c4518905ccaf-kube-api-access-6dptt" (OuterVolumeSpecName: "kube-api-access-6dptt") pod "b4f84a04-efe1-4685-9eef-c4518905ccaf" (UID: "b4f84a04-efe1-4685-9eef-c4518905ccaf"). InnerVolumeSpecName "kube-api-access-6dptt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.231541 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dptt\" (UniqueName: \"kubernetes.io/projected/b4f84a04-efe1-4685-9eef-c4518905ccaf-kube-api-access-6dptt\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.232121 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.232204 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4f84a04-efe1-4685-9eef-c4518905ccaf-util\") on node \"crc\" DevicePath \"\"" Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.734388 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.734371 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f839c4w9" event={"ID":"b4f84a04-efe1-4685-9eef-c4518905ccaf","Type":"ContainerDied","Data":"e81823a4393bc30fd974ee6e78c238ceb7e79eb0c4feb06bf848e856c5277ccb"} Dec 05 14:08:34 crc kubenswrapper[4858]: I1205 14:08:34.734551 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e81823a4393bc30fd974ee6e78c238ceb7e79eb0c4feb06bf848e856c5277ccb" Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.598742 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"] Dec 05 14:08:46 crc kubenswrapper[4858]: E1205 14:08:46.599463 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerName="pull" Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.599480 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerName="pull" Dec 05 14:08:46 crc kubenswrapper[4858]: E1205 14:08:46.599492 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1329b103-5d7b-492b-96ed-c7b5b10e8edd" containerName="console" Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.599499 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1329b103-5d7b-492b-96ed-c7b5b10e8edd" containerName="console" Dec 05 14:08:46 crc kubenswrapper[4858]: E1205 14:08:46.599518 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerName="util" Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.599524 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerName="util" Dec 05 14:08:46 crc kubenswrapper[4858]: E1205 14:08:46.599544 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerName="extract" Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.599556 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerName="extract" Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.599656 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f84a04-efe1-4685-9eef-c4518905ccaf" containerName="extract" Dec 
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.600113 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.605045 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.613236 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.613268 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zmqf5"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.613558 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.613413 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.637011 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"]
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.702690 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95292608-d7b7-42dd-aa8d-170acc415017-webhook-cert\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.702742 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95292608-d7b7-42dd-aa8d-170acc415017-apiservice-cert\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.702944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5bvv\" (UniqueName: \"kubernetes.io/projected/95292608-d7b7-42dd-aa8d-170acc415017-kube-api-access-p5bvv\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.804076 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95292608-d7b7-42dd-aa8d-170acc415017-webhook-cert\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.804996 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95292608-d7b7-42dd-aa8d-170acc415017-apiservice-cert\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.805061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5bvv\" (UniqueName: \"kubernetes.io/projected/95292608-d7b7-42dd-aa8d-170acc415017-kube-api-access-p5bvv\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.818753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95292608-d7b7-42dd-aa8d-170acc415017-apiservice-cert\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.819510 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95292608-d7b7-42dd-aa8d-170acc415017-webhook-cert\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.827532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5bvv\" (UniqueName: \"kubernetes.io/projected/95292608-d7b7-42dd-aa8d-170acc415017-kube-api-access-p5bvv\") pod \"metallb-operator-controller-manager-6b796cf87b-dbj7l\" (UID: \"95292608-d7b7-42dd-aa8d-170acc415017\") " pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:46 crc kubenswrapper[4858]: I1205 14:08:46.919764 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.009971 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"]
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.010882 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.013504 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.013911 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-th82n"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.014095 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.024909 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"]
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.111653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvlb\" (UniqueName: \"kubernetes.io/projected/daaa12d2-f682-4ef8-b225-ca15ff2076ba-kube-api-access-chvlb\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.111715 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/daaa12d2-f682-4ef8-b225-ca15ff2076ba-apiservice-cert\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.111762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/daaa12d2-f682-4ef8-b225-ca15ff2076ba-webhook-cert\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.213064 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvlb\" (UniqueName: \"kubernetes.io/projected/daaa12d2-f682-4ef8-b225-ca15ff2076ba-kube-api-access-chvlb\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.213106 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/daaa12d2-f682-4ef8-b225-ca15ff2076ba-apiservice-cert\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.213138 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/daaa12d2-f682-4ef8-b225-ca15ff2076ba-webhook-cert\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.219157 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/daaa12d2-f682-4ef8-b225-ca15ff2076ba-webhook-cert\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.220786 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/daaa12d2-f682-4ef8-b225-ca15ff2076ba-apiservice-cert\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.236521 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvlb\" (UniqueName: \"kubernetes.io/projected/daaa12d2-f682-4ef8-b225-ca15ff2076ba-kube-api-access-chvlb\") pod \"metallb-operator-webhook-server-666bd46db5-6xjlx\" (UID: \"daaa12d2-f682-4ef8-b225-ca15ff2076ba\") " pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.331908 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.347413 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"]
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.698051 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"]
Dec 05 14:08:47 crc kubenswrapper[4858]: W1205 14:08:47.702098 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaaa12d2_f682_4ef8_b225_ca15ff2076ba.slice/crio-253fa297c14ddac60a5d5f4eefcc1db8663ed1ee6f5d4930f8f7cfc7baf21f64 WatchSource:0}: Error finding container 253fa297c14ddac60a5d5f4eefcc1db8663ed1ee6f5d4930f8f7cfc7baf21f64: Status 404 returned error can't find the container with id 253fa297c14ddac60a5d5f4eefcc1db8663ed1ee6f5d4930f8f7cfc7baf21f64
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.848219 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l" event={"ID":"95292608-d7b7-42dd-aa8d-170acc415017","Type":"ContainerStarted","Data":"0c993174265bd916b333bbf2b7e0c5348e83ee94d574b0c1f6bc4b64eeaf8f25"}
Dec 05 14:08:47 crc kubenswrapper[4858]: I1205 14:08:47.849195 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx" event={"ID":"daaa12d2-f682-4ef8-b225-ca15ff2076ba","Type":"ContainerStarted","Data":"253fa297c14ddac60a5d5f4eefcc1db8663ed1ee6f5d4930f8f7cfc7baf21f64"}
Dec 05 14:08:57 crc kubenswrapper[4858]: I1205 14:08:57.075383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l" event={"ID":"95292608-d7b7-42dd-aa8d-170acc415017","Type":"ContainerStarted","Data":"7d5e522690a7ea9366ff613dc3f2126b4d4ae18bf8aa01f115eda94097e205a5"}
Dec 05 14:08:57 crc kubenswrapper[4858]: I1205 14:08:57.076007 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:08:57 crc kubenswrapper[4858]: I1205 14:08:57.077146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx" event={"ID":"daaa12d2-f682-4ef8-b225-ca15ff2076ba","Type":"ContainerStarted","Data":"1c4027dc0d06040354989623de83c6dcffcf61b8d32b05067b72564594b50e5f"}
Dec 05 14:08:57 crc kubenswrapper[4858]: I1205 14:08:57.077251 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:08:57 crc kubenswrapper[4858]: I1205 14:08:57.104468 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l" podStartSLOduration=1.927634808 podStartE2EDuration="11.104442195s" podCreationTimestamp="2025-12-05 14:08:46 +0000 UTC" firstStartedPulling="2025-12-05 14:08:47.361347911 +0000 UTC m=+735.908946050" lastFinishedPulling="2025-12-05 14:08:56.538155298 +0000 UTC m=+745.085753437" observedRunningTime="2025-12-05 14:08:57.097756238 +0000 UTC m=+745.645354397" watchObservedRunningTime="2025-12-05 14:08:57.104442195 +0000 UTC m=+745.652040334"
Dec 05 14:08:57 crc kubenswrapper[4858]: I1205 14:08:57.125540 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx" podStartSLOduration=2.273889479 podStartE2EDuration="11.125520237s" podCreationTimestamp="2025-12-05 14:08:46 +0000 UTC" firstStartedPulling="2025-12-05 14:08:47.705683361 +0000 UTC m=+736.253281500" lastFinishedPulling="2025-12-05 14:08:56.557314119 +0000 UTC m=+745.104912258" observedRunningTime="2025-12-05 14:08:57.123690628 +0000 UTC m=+745.671288777" watchObservedRunningTime="2025-12-05 14:08:57.125520237 +0000 UTC m=+745.673118376"
Dec 05 14:09:07 crc kubenswrapper[4858]: I1205 14:09:07.336472 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx"
Dec 05 14:09:17 crc kubenswrapper[4858]: I1205 14:09:17.873726 4858 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 05 14:09:26 crc kubenswrapper[4858]: I1205 14:09:26.926757 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b796cf87b-dbj7l"
Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.825464 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-756vt"]
Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.827966 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-756vt"
Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.829626 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc"]
Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.830486 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc"
Dec 05 14:09:27 crc kubenswrapper[4858]: W1205 14:09:27.830535 4858 reflector.go:561] object-"metallb-system"/"frr-k8s-daemon-dockercfg-lvwml": failed to list *v1.Secret: secrets "frr-k8s-daemon-dockercfg-lvwml" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Dec 05 14:09:27 crc kubenswrapper[4858]: E1205 14:09:27.830567 4858 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-daemon-dockercfg-lvwml\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"frr-k8s-daemon-dockercfg-lvwml\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 05 14:09:27 crc kubenswrapper[4858]: W1205 14:09:27.830709 4858 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: configmaps "frr-startup" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Dec 05 14:09:27 crc kubenswrapper[4858]: E1205 14:09:27.830724 4858 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"frr-startup\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 05 14:09:27 crc kubenswrapper[4858]: W1205 14:09:27.835152 4858 reflector.go:561] object-"metallb-system"/"frr-k8s-webhook-server-cert": failed to list *v1.Secret: secrets "frr-k8s-webhook-server-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Dec 05 14:09:27 crc kubenswrapper[4858]: E1205 14:09:27.835195 4858 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"frr-k8s-webhook-server-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 05 14:09:27 crc kubenswrapper[4858]: W1205 14:09:27.835152 4858 reflector.go:561] object-"metallb-system"/"frr-k8s-certs-secret": failed to list *v1.Secret: secrets "frr-k8s-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object
Dec 05 14:09:27 crc kubenswrapper[4858]: E1205 14:09:27.835231 4858 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"frr-k8s-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.843596 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc"]
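[Editor's note: the "forbidden ... no relationship found between node 'crc' and this object" warnings above come from node authorization: the API server lets a kubelet read a Secret or ConfigMap only once a pod bound to that node references it, and these frr-k8s objects were listed before the pods were bound; the later "Caches populated" records show the errors clearing. Below is a toy Go model of that rule, not the real authorizer.]

package main

import "fmt"

// pod models only what the node-authorization check needs: which node
// the pod is bound to and which secrets its volumes reference.
type pod struct {
	node    string
	secrets []string
}

// canNodeGetSecret allows a node to read a secret only if some pod
// bound to that node references it; otherwise there is "no relationship
// found" between the node and the object.
func canNodeGetSecret(pods []pod, node, secret string) bool {
	for _, p := range pods {
		if p.node != node {
			continue
		}
		for _, s := range p.secrets {
			if s == secret {
				return true
			}
		}
	}
	return false
}

func main() {
	var pods []pod // frr-k8s-756vt exists but is not yet bound to crc
	fmt.Println(canNodeGetSecret(pods, "crc", "frr-k8s-certs-secret")) // false: forbidden

	pods = append(pods, pod{node: "crc", secrets: []string{"frr-k8s-certs-secret"}})
	fmt.Println(canNodeGetSecret(pods, "crc", "frr-k8s-certs-secret")) // true once the pod is bound
}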
source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc"] Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.901786 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a181bba4-2682-4d6a-90cc-12bea5e07d34-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-hh2rc\" (UID: \"a181bba4-2682-4d6a-90cc-12bea5e07d34\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.901852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-reloader\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.901891 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-metrics-certs\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.901907 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz2gc\" (UniqueName: \"kubernetes.io/projected/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-kube-api-access-gz2gc\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.901934 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5gwv\" (UniqueName: \"kubernetes.io/projected/a181bba4-2682-4d6a-90cc-12bea5e07d34-kube-api-access-h5gwv\") pod \"frr-k8s-webhook-server-7fcb986d4-hh2rc\" (UID: \"a181bba4-2682-4d6a-90cc-12bea5e07d34\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.901950 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-startup\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.901965 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-conf\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.901985 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-metrics\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.902005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-sockets\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " 
pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.963983 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4bmzv"] Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.965632 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4bmzv" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.967581 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.967800 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.967958 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.968285 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2snpf" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.987491 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-wf646"] Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.988560 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:27 crc kubenswrapper[4858]: I1205 14:09:27.993391 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003260 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metrics-certs\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-reloader\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003359 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003385 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7v8\" (UniqueName: \"kubernetes.io/projected/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-kube-api-access-mc7v8\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-metrics-certs\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003444 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-gz2gc\" (UniqueName: \"kubernetes.io/projected/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-kube-api-access-gz2gc\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5gwv\" (UniqueName: \"kubernetes.io/projected/a181bba4-2682-4d6a-90cc-12bea5e07d34-kube-api-access-h5gwv\") pod \"frr-k8s-webhook-server-7fcb986d4-hh2rc\" (UID: \"a181bba4-2682-4d6a-90cc-12bea5e07d34\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003506 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-startup\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003523 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-conf\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-metrics\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003592 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-sockets\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003652 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metallb-excludel2\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.003674 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a181bba4-2682-4d6a-90cc-12bea5e07d34-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-hh2rc\" (UID: \"a181bba4-2682-4d6a-90cc-12bea5e07d34\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.004651 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-reloader\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.004698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-sockets\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " 
pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.004740 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-wf646"] Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.005090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-conf\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.005577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-metrics\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.039673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz2gc\" (UniqueName: \"kubernetes.io/projected/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-kube-api-access-gz2gc\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.047504 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5gwv\" (UniqueName: \"kubernetes.io/projected/a181bba4-2682-4d6a-90cc-12bea5e07d34-kube-api-access-h5gwv\") pod \"frr-k8s-webhook-server-7fcb986d4-hh2rc\" (UID: \"a181bba4-2682-4d6a-90cc-12bea5e07d34\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.105332 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metrics-certs\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.105408 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.105434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad1cb414-76a1-4dba-a006-9fec16fbf90d-metrics-certs\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.105460 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7v8\" (UniqueName: \"kubernetes.io/projected/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-kube-api-access-mc7v8\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.105514 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5pc\" (UniqueName: \"kubernetes.io/projected/ad1cb414-76a1-4dba-a006-9fec16fbf90d-kube-api-access-4p5pc\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " 
pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.105708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad1cb414-76a1-4dba-a006-9fec16fbf90d-cert\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.105777 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metallb-excludel2\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: E1205 14:09:28.105945 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 14:09:28 crc kubenswrapper[4858]: E1205 14:09:28.105953 4858 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 05 14:09:28 crc kubenswrapper[4858]: E1205 14:09:28.106056 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist podName:8c029ca1-2a2b-4983-855f-a9e6d7a7d306 nodeName:}" failed. No retries permitted until 2025-12-05 14:09:28.606040389 +0000 UTC m=+777.153638528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist") pod "speaker-4bmzv" (UID: "8c029ca1-2a2b-4983-855f-a9e6d7a7d306") : secret "metallb-memberlist" not found Dec 05 14:09:28 crc kubenswrapper[4858]: E1205 14:09:28.106086 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metrics-certs podName:8c029ca1-2a2b-4983-855f-a9e6d7a7d306 nodeName:}" failed. No retries permitted until 2025-12-05 14:09:28.60607247 +0000 UTC m=+777.153670829 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metrics-certs") pod "speaker-4bmzv" (UID: "8c029ca1-2a2b-4983-855f-a9e6d7a7d306") : secret "speaker-certs-secret" not found Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.106926 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metallb-excludel2\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.124438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7v8\" (UniqueName: \"kubernetes.io/projected/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-kube-api-access-mc7v8\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.206905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad1cb414-76a1-4dba-a006-9fec16fbf90d-cert\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.207015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad1cb414-76a1-4dba-a006-9fec16fbf90d-metrics-certs\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.207057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p5pc\" (UniqueName: \"kubernetes.io/projected/ad1cb414-76a1-4dba-a006-9fec16fbf90d-kube-api-access-4p5pc\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.211127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad1cb414-76a1-4dba-a006-9fec16fbf90d-metrics-certs\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.211413 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad1cb414-76a1-4dba-a006-9fec16fbf90d-cert\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.223200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p5pc\" (UniqueName: \"kubernetes.io/projected/ad1cb414-76a1-4dba-a006-9fec16fbf90d-kube-api-access-4p5pc\") pod \"controller-f8648f98b-wf646\" (UID: \"ad1cb414-76a1-4dba-a006-9fec16fbf90d\") " pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.306182 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.613234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metrics-certs\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.613770 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: E1205 14:09:28.613954 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 14:09:28 crc kubenswrapper[4858]: E1205 14:09:28.614034 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist podName:8c029ca1-2a2b-4983-855f-a9e6d7a7d306 nodeName:}" failed. No retries permitted until 2025-12-05 14:09:29.614009652 +0000 UTC m=+778.161607791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist") pod "speaker-4bmzv" (UID: "8c029ca1-2a2b-4983-855f-a9e6d7a7d306") : secret "metallb-memberlist" not found Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.620321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-metrics-certs\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.684902 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.696716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-metrics-certs\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.711842 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-wf646"] Dec 05 14:09:28 crc kubenswrapper[4858]: I1205 14:09:28.956147 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lvwml" Dec 05 14:09:29 crc kubenswrapper[4858]: E1205 14:09:29.004228 4858 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Dec 05 14:09:29 crc kubenswrapper[4858]: E1205 14:09:29.004317 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a181bba4-2682-4d6a-90cc-12bea5e07d34-cert podName:a181bba4-2682-4d6a-90cc-12bea5e07d34 nodeName:}" failed. No retries permitted until 2025-12-05 14:09:29.504298647 +0000 UTC m=+778.051896776 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a181bba4-2682-4d6a-90cc-12bea5e07d34-cert") pod "frr-k8s-webhook-server-7fcb986d4-hh2rc" (UID: "a181bba4-2682-4d6a-90cc-12bea5e07d34") : failed to sync secret cache: timed out waiting for the condition Dec 05 14:09:29 crc kubenswrapper[4858]: E1205 14:09:29.005054 4858 configmap.go:193] Couldn't get configMap metallb-system/frr-startup: failed to sync configmap cache: timed out waiting for the condition Dec 05 14:09:29 crc kubenswrapper[4858]: E1205 14:09:29.005177 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-startup podName:9a3a124e-0ac1-4f2a-aee6-3cae0fd66576 nodeName:}" failed. No retries permitted until 2025-12-05 14:09:29.50515506 +0000 UTC m=+778.052753199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "frr-startup" (UniqueName: "kubernetes.io/configmap/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-startup") pod "frr-k8s-756vt" (UID: "9a3a124e-0ac1-4f2a-aee6-3cae0fd66576") : failed to sync configmap cache: timed out waiting for the condition Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.051351 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.250694 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wf646" event={"ID":"ad1cb414-76a1-4dba-a006-9fec16fbf90d","Type":"ContainerStarted","Data":"3f6dceaa56ca5e2d9a28b7865c4bd45a3c6144417e128e898e0c014f1283d656"} Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.397269 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.523662 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-startup\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.523731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a181bba4-2682-4d6a-90cc-12bea5e07d34-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-hh2rc\" (UID: \"a181bba4-2682-4d6a-90cc-12bea5e07d34\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.524732 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9a3a124e-0ac1-4f2a-aee6-3cae0fd66576-frr-startup\") pod \"frr-k8s-756vt\" (UID: \"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576\") " pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.527780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a181bba4-2682-4d6a-90cc-12bea5e07d34-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-hh2rc\" (UID: \"a181bba4-2682-4d6a-90cc-12bea5e07d34\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.625019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist\") pod 
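[Editor's note: the "failed to sync secret cache: timed out waiting for the condition" errors above do not mean the objects are missing: the mount raced ahead of the just-created reflector caches, and once the "Caches populated" records fire, the retried mounts succeed. An illustrative Go sketch of waiting on a cache-sync condition with a timeout; a stand-in for the informer wait, not client-go's actual implementation, and the timings are invented for the demo.]

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForCacheSync polls synced() until it reports true or the timeout
// elapses, returning the same "timed out waiting for the condition"
// wording seen in the log above.
func waitForCacheSync(synced func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if synced() {
			return nil
		}
		time.Sleep(50 * time.Millisecond)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	// The cache becomes synced 300ms from now, standing in for the
	// "Caches populated" record that follows the failures above.
	populatedAt := time.Now().Add(300 * time.Millisecond)
	synced := func() bool { return time.Now().After(populatedAt) }

	if err := waitForCacheSync(synced, 200*time.Millisecond); err != nil {
		fmt.Println("first mount attempt:", err) // races ahead of the reflector
	}
	if err := waitForCacheSync(synced, 200*time.Millisecond); err == nil {
		fmt.Println("retry: cache synced, MountVolume.SetUp can proceed")
	}
}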
\"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:29 crc kubenswrapper[4858]: E1205 14:09:29.625407 4858 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 14:09:29 crc kubenswrapper[4858]: E1205 14:09:29.625570 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist podName:8c029ca1-2a2b-4983-855f-a9e6d7a7d306 nodeName:}" failed. No retries permitted until 2025-12-05 14:09:31.625544569 +0000 UTC m=+780.173142708 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist") pod "speaker-4bmzv" (UID: "8c029ca1-2a2b-4983-855f-a9e6d7a7d306") : secret "metallb-memberlist" not found Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.648338 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.655432 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:29 crc kubenswrapper[4858]: I1205 14:09:29.979944 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc"] Dec 05 14:09:30 crc kubenswrapper[4858]: I1205 14:09:30.258046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wf646" event={"ID":"ad1cb414-76a1-4dba-a006-9fec16fbf90d","Type":"ContainerStarted","Data":"1d9a1b674699bcae8824686c5501dc171805d6b14beb275b230aa735a9376a3c"} Dec 05 14:09:30 crc kubenswrapper[4858]: I1205 14:09:30.258377 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wf646" event={"ID":"ad1cb414-76a1-4dba-a006-9fec16fbf90d","Type":"ContainerStarted","Data":"6b3b97881fe9a6b284871d527d68c404db54b725f24e74f96897f14f20a3f361"} Dec 05 14:09:30 crc kubenswrapper[4858]: I1205 14:09:30.259770 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:30 crc kubenswrapper[4858]: I1205 14:09:30.260229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" event={"ID":"a181bba4-2682-4d6a-90cc-12bea5e07d34","Type":"ContainerStarted","Data":"554a955df41e7077ca4f21b1dd3b9945a19ccc32790b28797499ebbc80ca33b5"} Dec 05 14:09:30 crc kubenswrapper[4858]: I1205 14:09:30.261479 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerStarted","Data":"9164f42ba1d5be96ba91fa61bf0507a47514b11db2eb9cff23212d47bd44f2f9"} Dec 05 14:09:31 crc kubenswrapper[4858]: I1205 14:09:31.670923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist\") pod \"speaker-4bmzv\" (UID: \"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:31 crc kubenswrapper[4858]: I1205 14:09:31.679321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8c029ca1-2a2b-4983-855f-a9e6d7a7d306-memberlist\") pod \"speaker-4bmzv\" (UID: 
\"8c029ca1-2a2b-4983-855f-a9e6d7a7d306\") " pod="metallb-system/speaker-4bmzv" Dec 05 14:09:31 crc kubenswrapper[4858]: I1205 14:09:31.896335 4858 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2snpf" Dec 05 14:09:31 crc kubenswrapper[4858]: I1205 14:09:31.905039 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4bmzv" Dec 05 14:09:31 crc kubenswrapper[4858]: I1205 14:09:31.967448 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-wf646" podStartSLOduration=4.967425145 podStartE2EDuration="4.967425145s" podCreationTimestamp="2025-12-05 14:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:09:30.281761244 +0000 UTC m=+778.829359383" watchObservedRunningTime="2025-12-05 14:09:31.967425145 +0000 UTC m=+780.515023284" Dec 05 14:09:31 crc kubenswrapper[4858]: W1205 14:09:31.997449 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c029ca1_2a2b_4983_855f_a9e6d7a7d306.slice/crio-2c6c747d473b8809e487f2769d55e305b4ab7e258708b05be6eafd25ad943890 WatchSource:0}: Error finding container 2c6c747d473b8809e487f2769d55e305b4ab7e258708b05be6eafd25ad943890: Status 404 returned error can't find the container with id 2c6c747d473b8809e487f2769d55e305b4ab7e258708b05be6eafd25ad943890 Dec 05 14:09:32 crc kubenswrapper[4858]: I1205 14:09:32.275682 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4bmzv" event={"ID":"8c029ca1-2a2b-4983-855f-a9e6d7a7d306","Type":"ContainerStarted","Data":"2c6c747d473b8809e487f2769d55e305b4ab7e258708b05be6eafd25ad943890"} Dec 05 14:09:33 crc kubenswrapper[4858]: I1205 14:09:33.290247 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4bmzv" event={"ID":"8c029ca1-2a2b-4983-855f-a9e6d7a7d306","Type":"ContainerStarted","Data":"f30eb0411f96373a31aad38c85cb8a89bb020a15fd91cac1d08aba91e4a9159f"} Dec 05 14:09:34 crc kubenswrapper[4858]: I1205 14:09:34.298017 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4bmzv" event={"ID":"8c029ca1-2a2b-4983-855f-a9e6d7a7d306","Type":"ContainerStarted","Data":"e457c7259d15eed239365af87773f64fd2cb9986031977ad59153dc3ba75c672"} Dec 05 14:09:34 crc kubenswrapper[4858]: I1205 14:09:34.299053 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4bmzv" Dec 05 14:09:34 crc kubenswrapper[4858]: I1205 14:09:34.315897 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4bmzv" podStartSLOduration=7.315881245 podStartE2EDuration="7.315881245s" podCreationTimestamp="2025-12-05 14:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:09:34.311146967 +0000 UTC m=+782.858745106" watchObservedRunningTime="2025-12-05 14:09:34.315881245 +0000 UTC m=+782.863479384" Dec 05 14:09:39 crc kubenswrapper[4858]: I1205 14:09:39.357527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" event={"ID":"a181bba4-2682-4d6a-90cc-12bea5e07d34","Type":"ContainerStarted","Data":"969e8c2e7ddb13e90468634a57c35591f35f20b22beb2c8c1cff7bec04df789a"} Dec 05 14:09:39 crc kubenswrapper[4858]: I1205 
14:09:39.358257 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:39 crc kubenswrapper[4858]: I1205 14:09:39.360193 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerID="96920b9055e88050993ce6facfc461c5c92046357d5a3cbb87d07df13dd937cd" exitCode=0 Dec 05 14:09:39 crc kubenswrapper[4858]: I1205 14:09:39.360241 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerDied","Data":"96920b9055e88050993ce6facfc461c5c92046357d5a3cbb87d07df13dd937cd"} Dec 05 14:09:39 crc kubenswrapper[4858]: I1205 14:09:39.389885 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" podStartSLOduration=3.768993821 podStartE2EDuration="12.38986916s" podCreationTimestamp="2025-12-05 14:09:27 +0000 UTC" firstStartedPulling="2025-12-05 14:09:29.982364132 +0000 UTC m=+778.529962271" lastFinishedPulling="2025-12-05 14:09:38.603239471 +0000 UTC m=+787.150837610" observedRunningTime="2025-12-05 14:09:39.388734059 +0000 UTC m=+787.936332198" watchObservedRunningTime="2025-12-05 14:09:39.38986916 +0000 UTC m=+787.937467299" Dec 05 14:09:40 crc kubenswrapper[4858]: I1205 14:09:40.366736 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerID="0eefd1c7dd8cda90e129713399ca02ea1aa907cf917715e29cd1c8e5672a1048" exitCode=0 Dec 05 14:09:40 crc kubenswrapper[4858]: I1205 14:09:40.366837 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerDied","Data":"0eefd1c7dd8cda90e129713399ca02ea1aa907cf917715e29cd1c8e5672a1048"} Dec 05 14:09:41 crc kubenswrapper[4858]: I1205 14:09:41.378040 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerID="187d989905ada934a146fa8ed690f501e493535e78783e22c447bbd6c8acc419" exitCode=0 Dec 05 14:09:41 crc kubenswrapper[4858]: I1205 14:09:41.378120 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerDied","Data":"187d989905ada934a146fa8ed690f501e493535e78783e22c447bbd6c8acc419"} Dec 05 14:09:42 crc kubenswrapper[4858]: I1205 14:09:42.391099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerStarted","Data":"27b370d90c2fc4ebf8312d7f6c51ed84d811cc18f7367e66b20be561e5eb0d81"} Dec 05 14:09:42 crc kubenswrapper[4858]: I1205 14:09:42.391380 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerStarted","Data":"fa437117985f51a075e7606c27920f9b18736026cc7581f2ce5d92d424e1e868"} Dec 05 14:09:42 crc kubenswrapper[4858]: I1205 14:09:42.391392 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerStarted","Data":"93b68b3b966801a07cc7bd68f021a7273b4b10196581d620b852ec9b99b07a1a"} Dec 05 14:09:42 crc kubenswrapper[4858]: I1205 14:09:42.391404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" 
event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerStarted","Data":"b69b2320211476ae65efb04f92c88b9a653258a7957b8d14810a2a6bfaa50938"} Dec 05 14:09:42 crc kubenswrapper[4858]: I1205 14:09:42.391414 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerStarted","Data":"9cc5a7a323c4d6720a3367f590cb74908d059a2882f0aa400437f878cc6aa4a5"} Dec 05 14:09:43 crc kubenswrapper[4858]: I1205 14:09:43.400162 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerStarted","Data":"b71dab5dfe94cf8970a52c488cd877a4b68759d4d9fa0f71f3c5214dc937364d"} Dec 05 14:09:43 crc kubenswrapper[4858]: I1205 14:09:43.400986 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:43 crc kubenswrapper[4858]: I1205 14:09:43.422444 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-756vt" podStartSLOduration=7.921942771 podStartE2EDuration="16.422423652s" podCreationTimestamp="2025-12-05 14:09:27 +0000 UTC" firstStartedPulling="2025-12-05 14:09:30.088014949 +0000 UTC m=+778.635613088" lastFinishedPulling="2025-12-05 14:09:38.58849583 +0000 UTC m=+787.136093969" observedRunningTime="2025-12-05 14:09:43.418872136 +0000 UTC m=+791.966470295" watchObservedRunningTime="2025-12-05 14:09:43.422423652 +0000 UTC m=+791.970021791" Dec 05 14:09:44 crc kubenswrapper[4858]: I1205 14:09:44.649512 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:44 crc kubenswrapper[4858]: I1205 14:09:44.693429 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-756vt" Dec 05 14:09:44 crc kubenswrapper[4858]: I1205 14:09:44.760220 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:09:44 crc kubenswrapper[4858]: I1205 14:09:44.760287 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:09:48 crc kubenswrapper[4858]: I1205 14:09:48.310265 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-wf646" Dec 05 14:09:49 crc kubenswrapper[4858]: I1205 14:09:49.660095 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" Dec 05 14:09:51 crc kubenswrapper[4858]: I1205 14:09:51.912210 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4bmzv" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.384920 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v4krc"] Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.386110 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v4krc" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.388198 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vlpzq" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.388630 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.388946 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.398528 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v4krc"] Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.513680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8tdz\" (UniqueName: \"kubernetes.io/projected/fca6b70a-032e-4cb2-aa58-463d981a7ca2-kube-api-access-p8tdz\") pod \"openstack-operator-index-v4krc\" (UID: \"fca6b70a-032e-4cb2-aa58-463d981a7ca2\") " pod="openstack-operators/openstack-operator-index-v4krc" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.615480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8tdz\" (UniqueName: \"kubernetes.io/projected/fca6b70a-032e-4cb2-aa58-463d981a7ca2-kube-api-access-p8tdz\") pod \"openstack-operator-index-v4krc\" (UID: \"fca6b70a-032e-4cb2-aa58-463d981a7ca2\") " pod="openstack-operators/openstack-operator-index-v4krc" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.638465 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8tdz\" (UniqueName: \"kubernetes.io/projected/fca6b70a-032e-4cb2-aa58-463d981a7ca2-kube-api-access-p8tdz\") pod \"openstack-operator-index-v4krc\" (UID: \"fca6b70a-032e-4cb2-aa58-463d981a7ca2\") " pod="openstack-operators/openstack-operator-index-v4krc" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.704495 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v4krc" Dec 05 14:09:55 crc kubenswrapper[4858]: I1205 14:09:55.938287 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v4krc"] Dec 05 14:09:55 crc kubenswrapper[4858]: W1205 14:09:55.944863 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfca6b70a_032e_4cb2_aa58_463d981a7ca2.slice/crio-7796a757bf81faae1c7d5eacbe3bedaece639848169b3497bd7b63fdf0a1e71a WatchSource:0}: Error finding container 7796a757bf81faae1c7d5eacbe3bedaece639848169b3497bd7b63fdf0a1e71a: Status 404 returned error can't find the container with id 7796a757bf81faae1c7d5eacbe3bedaece639848169b3497bd7b63fdf0a1e71a Dec 05 14:09:56 crc kubenswrapper[4858]: I1205 14:09:56.477990 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v4krc" event={"ID":"fca6b70a-032e-4cb2-aa58-463d981a7ca2","Type":"ContainerStarted","Data":"7796a757bf81faae1c7d5eacbe3bedaece639848169b3497bd7b63fdf0a1e71a"} Dec 05 14:09:59 crc kubenswrapper[4858]: I1205 14:09:59.654323 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-756vt" Dec 05 14:10:00 crc kubenswrapper[4858]: I1205 14:10:00.158703 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v4krc"] Dec 05 14:10:00 crc kubenswrapper[4858]: I1205 14:10:00.567348 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qbj7t"] Dec 05 14:10:00 crc kubenswrapper[4858]: I1205 14:10:00.568645 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:00 crc kubenswrapper[4858]: I1205 14:10:00.573272 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qbj7t"] Dec 05 14:10:00 crc kubenswrapper[4858]: I1205 14:10:00.686206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzm5w\" (UniqueName: \"kubernetes.io/projected/b87af213-3539-45a1-bbe5-c4fd1161ff1b-kube-api-access-bzm5w\") pod \"openstack-operator-index-qbj7t\" (UID: \"b87af213-3539-45a1-bbe5-c4fd1161ff1b\") " pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:00 crc kubenswrapper[4858]: I1205 14:10:00.788494 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzm5w\" (UniqueName: \"kubernetes.io/projected/b87af213-3539-45a1-bbe5-c4fd1161ff1b-kube-api-access-bzm5w\") pod \"openstack-operator-index-qbj7t\" (UID: \"b87af213-3539-45a1-bbe5-c4fd1161ff1b\") " pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:00 crc kubenswrapper[4858]: I1205 14:10:00.808100 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzm5w\" (UniqueName: \"kubernetes.io/projected/b87af213-3539-45a1-bbe5-c4fd1161ff1b-kube-api-access-bzm5w\") pod \"openstack-operator-index-qbj7t\" (UID: \"b87af213-3539-45a1-bbe5-c4fd1161ff1b\") " pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:00 crc kubenswrapper[4858]: I1205 14:10:00.902750 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:02 crc kubenswrapper[4858]: I1205 14:10:02.391854 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qbj7t"] Dec 05 14:10:03 crc kubenswrapper[4858]: W1205 14:10:03.401056 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb87af213_3539_45a1_bbe5_c4fd1161ff1b.slice/crio-34429e8e62beec14cb3ed1a693e4b818d8cc48dcc1c6ebb3fdc58df3db76ebb6 WatchSource:0}: Error finding container 34429e8e62beec14cb3ed1a693e4b818d8cc48dcc1c6ebb3fdc58df3db76ebb6: Status 404 returned error can't find the container with id 34429e8e62beec14cb3ed1a693e4b818d8cc48dcc1c6ebb3fdc58df3db76ebb6 Dec 05 14:10:03 crc kubenswrapper[4858]: I1205 14:10:03.519012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qbj7t" event={"ID":"b87af213-3539-45a1-bbe5-c4fd1161ff1b","Type":"ContainerStarted","Data":"34429e8e62beec14cb3ed1a693e4b818d8cc48dcc1c6ebb3fdc58df3db76ebb6"} Dec 05 14:10:04 crc kubenswrapper[4858]: I1205 14:10:04.743668 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v4krc" event={"ID":"fca6b70a-032e-4cb2-aa58-463d981a7ca2","Type":"ContainerStarted","Data":"4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968"} Dec 05 14:10:04 crc kubenswrapper[4858]: I1205 14:10:04.743723 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-v4krc" podUID="fca6b70a-032e-4cb2-aa58-463d981a7ca2" containerName="registry-server" containerID="cri-o://4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968" gracePeriod=2 Dec 05 14:10:04 crc kubenswrapper[4858]: I1205 14:10:04.747424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qbj7t" event={"ID":"b87af213-3539-45a1-bbe5-c4fd1161ff1b","Type":"ContainerStarted","Data":"d7d1506cf1695e486e0b6b1ef5da8bbeaf30c1432c7e7fdc24e6a6a2ae1ee8cd"} Dec 05 14:10:04 crc kubenswrapper[4858]: I1205 14:10:04.761019 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v4krc" podStartSLOduration=2.174658188 podStartE2EDuration="9.760977569s" podCreationTimestamp="2025-12-05 14:09:55 +0000 UTC" firstStartedPulling="2025-12-05 14:09:55.949491252 +0000 UTC m=+804.497089391" lastFinishedPulling="2025-12-05 14:10:03.535810633 +0000 UTC m=+812.083408772" observedRunningTime="2025-12-05 14:10:04.760412494 +0000 UTC m=+813.308010653" watchObservedRunningTime="2025-12-05 14:10:04.760977569 +0000 UTC m=+813.308575708" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.094987 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v4krc" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.116089 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qbj7t" podStartSLOduration=4.9818289799999995 podStartE2EDuration="5.116065063s" podCreationTimestamp="2025-12-05 14:10:00 +0000 UTC" firstStartedPulling="2025-12-05 14:10:03.40338629 +0000 UTC m=+811.950984429" lastFinishedPulling="2025-12-05 14:10:03.537622373 +0000 UTC m=+812.085220512" observedRunningTime="2025-12-05 14:10:04.778239709 +0000 UTC m=+813.325837868" watchObservedRunningTime="2025-12-05 14:10:05.116065063 +0000 UTC m=+813.663663202" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.147292 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8tdz\" (UniqueName: \"kubernetes.io/projected/fca6b70a-032e-4cb2-aa58-463d981a7ca2-kube-api-access-p8tdz\") pod \"fca6b70a-032e-4cb2-aa58-463d981a7ca2\" (UID: \"fca6b70a-032e-4cb2-aa58-463d981a7ca2\") " Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.154163 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca6b70a-032e-4cb2-aa58-463d981a7ca2-kube-api-access-p8tdz" (OuterVolumeSpecName: "kube-api-access-p8tdz") pod "fca6b70a-032e-4cb2-aa58-463d981a7ca2" (UID: "fca6b70a-032e-4cb2-aa58-463d981a7ca2"). InnerVolumeSpecName "kube-api-access-p8tdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.249387 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8tdz\" (UniqueName: \"kubernetes.io/projected/fca6b70a-032e-4cb2-aa58-463d981a7ca2-kube-api-access-p8tdz\") on node \"crc\" DevicePath \"\"" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.756557 4858 generic.go:334] "Generic (PLEG): container finished" podID="fca6b70a-032e-4cb2-aa58-463d981a7ca2" containerID="4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968" exitCode=0 Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.756969 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v4krc" event={"ID":"fca6b70a-032e-4cb2-aa58-463d981a7ca2","Type":"ContainerDied","Data":"4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968"} Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.757015 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v4krc" event={"ID":"fca6b70a-032e-4cb2-aa58-463d981a7ca2","Type":"ContainerDied","Data":"7796a757bf81faae1c7d5eacbe3bedaece639848169b3497bd7b63fdf0a1e71a"} Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.757033 4858 scope.go:117] "RemoveContainer" containerID="4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.757260 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v4krc" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.785511 4858 scope.go:117] "RemoveContainer" containerID="4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968" Dec 05 14:10:05 crc kubenswrapper[4858]: E1205 14:10:05.789041 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968\": container with ID starting with 4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968 not found: ID does not exist" containerID="4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.789113 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968"} err="failed to get container status \"4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968\": rpc error: code = NotFound desc = could not find container \"4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968\": container with ID starting with 4f883d31e4b4072e501d23b0c57db8b498ed628a3cbdb41dfddd4fe3dcc64968 not found: ID does not exist" Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.801710 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v4krc"] Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.806147 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-v4krc"] Dec 05 14:10:05 crc kubenswrapper[4858]: I1205 14:10:05.909307 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fca6b70a-032e-4cb2-aa58-463d981a7ca2" path="/var/lib/kubelet/pods/fca6b70a-032e-4cb2-aa58-463d981a7ca2/volumes" Dec 05 14:10:10 crc kubenswrapper[4858]: I1205 14:10:10.903029 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:10 crc kubenswrapper[4858]: I1205 14:10:10.903421 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:10 crc kubenswrapper[4858]: I1205 14:10:10.930009 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:11 crc kubenswrapper[4858]: I1205 14:10:11.813540 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qbj7t" Dec 05 14:10:14 crc kubenswrapper[4858]: I1205 14:10:14.759987 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:10:14 crc kubenswrapper[4858]: I1205 14:10:14.760210 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.319664 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7"] Dec 05 14:10:18 crc kubenswrapper[4858]: E1205 14:10:18.320231 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca6b70a-032e-4cb2-aa58-463d981a7ca2" containerName="registry-server" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.320248 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca6b70a-032e-4cb2-aa58-463d981a7ca2" containerName="registry-server" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.320407 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca6b70a-032e-4cb2-aa58-463d981a7ca2" containerName="registry-server" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.321443 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.326854 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wmcm6" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.329741 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7"] Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.424205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89wf\" (UniqueName: \"kubernetes.io/projected/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-kube-api-access-s89wf\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.424333 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-util\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.424362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-bundle\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.525396 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s89wf\" (UniqueName: \"kubernetes.io/projected/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-kube-api-access-s89wf\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.525914 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-util\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: 
\"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.526273 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-bundle\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.526484 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-util\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.526671 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-bundle\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.546787 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89wf\" (UniqueName: \"kubernetes.io/projected/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-kube-api-access-s89wf\") pod \"54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.636991 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:18 crc kubenswrapper[4858]: I1205 14:10:18.847440 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7"] Dec 05 14:10:19 crc kubenswrapper[4858]: I1205 14:10:19.840513 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerID="fbd85997075957472742befb412e756699ba3340893adafd3abc7295454fb099" exitCode=0 Dec 05 14:10:19 crc kubenswrapper[4858]: I1205 14:10:19.840810 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" event={"ID":"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4","Type":"ContainerDied","Data":"fbd85997075957472742befb412e756699ba3340893adafd3abc7295454fb099"} Dec 05 14:10:19 crc kubenswrapper[4858]: I1205 14:10:19.840907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" event={"ID":"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4","Type":"ContainerStarted","Data":"7a0011228c28da6479324c4571caa3254cd130dc1f8a060e8b587a7bdfefca4a"} Dec 05 14:10:20 crc kubenswrapper[4858]: I1205 14:10:20.849923 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerID="7bfbc6e00abe52e5b95c54d7725f6c8c0b50c9b5709d49af0b48631fd68c174f" exitCode=0 Dec 05 14:10:20 crc kubenswrapper[4858]: I1205 14:10:20.849996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" event={"ID":"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4","Type":"ContainerDied","Data":"7bfbc6e00abe52e5b95c54d7725f6c8c0b50c9b5709d49af0b48631fd68c174f"} Dec 05 14:10:22 crc kubenswrapper[4858]: I1205 14:10:22.864068 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerID="0d052bf2c44225a5b15fd16491deedd68984188b4da4211682c777eadbd015fc" exitCode=0 Dec 05 14:10:22 crc kubenswrapper[4858]: I1205 14:10:22.864131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" event={"ID":"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4","Type":"ContainerDied","Data":"0d052bf2c44225a5b15fd16491deedd68984188b4da4211682c777eadbd015fc"} Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.094950 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.207793 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s89wf\" (UniqueName: \"kubernetes.io/projected/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-kube-api-access-s89wf\") pod \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.207944 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-bundle\") pod \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.207984 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-util\") pod \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\" (UID: \"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4\") " Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.208780 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-bundle" (OuterVolumeSpecName: "bundle") pod "ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" (UID: "ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.213482 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-kube-api-access-s89wf" (OuterVolumeSpecName: "kube-api-access-s89wf") pod "ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" (UID: "ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4"). InnerVolumeSpecName "kube-api-access-s89wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.309216 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s89wf\" (UniqueName: \"kubernetes.io/projected/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-kube-api-access-s89wf\") on node \"crc\" DevicePath \"\"" Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.309249 4858 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.414904 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-util" (OuterVolumeSpecName: "util") pod "ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" (UID: "ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.511104 4858 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4-util\") on node \"crc\" DevicePath \"\"" Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.881048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" event={"ID":"ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4","Type":"ContainerDied","Data":"7a0011228c28da6479324c4571caa3254cd130dc1f8a060e8b587a7bdfefca4a"} Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.881114 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a0011228c28da6479324c4571caa3254cd130dc1f8a060e8b587a7bdfefca4a" Dec 05 14:10:24 crc kubenswrapper[4858]: I1205 14:10:24.881125 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/54624d9bc18f8a016377f1f3112fde6fedad9a4c44e3645ca1ebbf86f2w78s7" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.314006 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c"] Dec 05 14:10:30 crc kubenswrapper[4858]: E1205 14:10:30.314852 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerName="util" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.314871 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerName="util" Dec 05 14:10:30 crc kubenswrapper[4858]: E1205 14:10:30.314886 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerName="pull" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.314896 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerName="pull" Dec 05 14:10:30 crc kubenswrapper[4858]: E1205 14:10:30.314915 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerName="extract" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.314923 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerName="extract" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.315054 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2fc0fa-75c9-4cdf-b8d0-7fa0ae1e0ee4" containerName="extract" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.315554 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.319375 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-fn887" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.364372 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c"] Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.385455 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djl8w\" (UniqueName: \"kubernetes.io/projected/68726d0b-bb20-490e-9365-057f17dd745f-kube-api-access-djl8w\") pod \"openstack-operator-controller-operator-76c99b5449-pn96c\" (UID: \"68726d0b-bb20-490e-9365-057f17dd745f\") " pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.487135 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djl8w\" (UniqueName: \"kubernetes.io/projected/68726d0b-bb20-490e-9365-057f17dd745f-kube-api-access-djl8w\") pod \"openstack-operator-controller-operator-76c99b5449-pn96c\" (UID: \"68726d0b-bb20-490e-9365-057f17dd745f\") " pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.505651 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djl8w\" (UniqueName: \"kubernetes.io/projected/68726d0b-bb20-490e-9365-057f17dd745f-kube-api-access-djl8w\") pod \"openstack-operator-controller-operator-76c99b5449-pn96c\" (UID: \"68726d0b-bb20-490e-9365-057f17dd745f\") " pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.632592 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.872646 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c"] Dec 05 14:10:30 crc kubenswrapper[4858]: I1205 14:10:30.916067 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" event={"ID":"68726d0b-bb20-490e-9365-057f17dd745f","Type":"ContainerStarted","Data":"9ac578c368c63095c4b63f8af4849ef44577d3bf0693b15517579be76baef32d"} Dec 05 14:10:37 crc kubenswrapper[4858]: I1205 14:10:37.972380 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" event={"ID":"68726d0b-bb20-490e-9365-057f17dd745f","Type":"ContainerStarted","Data":"ac120585f3090f243cefb57099e60a08f8e3316b45017b0daa35eed6ec155735"} Dec 05 14:10:38 crc kubenswrapper[4858]: I1205 14:10:38.977780 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" Dec 05 14:10:39 crc kubenswrapper[4858]: I1205 14:10:39.002529 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" podStartSLOduration=2.136177976 podStartE2EDuration="9.002512666s" podCreationTimestamp="2025-12-05 14:10:30 +0000 UTC" firstStartedPulling="2025-12-05 14:10:30.880062583 +0000 UTC m=+839.427660722" lastFinishedPulling="2025-12-05 14:10:37.746397253 +0000 UTC m=+846.293995412" observedRunningTime="2025-12-05 14:10:39.002295522 +0000 UTC m=+847.549893681" watchObservedRunningTime="2025-12-05 14:10:39.002512666 +0000 UTC m=+847.550110805" Dec 05 14:10:44 crc kubenswrapper[4858]: I1205 14:10:44.760406 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:10:44 crc kubenswrapper[4858]: I1205 14:10:44.761372 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:10:44 crc kubenswrapper[4858]: I1205 14:10:44.761454 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:10:44 crc kubenswrapper[4858]: I1205 14:10:44.762430 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aeb26ce2f72c5b27c0b5939e948f7b4c1c734a8dc5b04d0306f5422f039d5f18"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:10:44 crc kubenswrapper[4858]: I1205 14:10:44.762510 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" 
containerName="machine-config-daemon" containerID="cri-o://aeb26ce2f72c5b27c0b5939e948f7b4c1c734a8dc5b04d0306f5422f039d5f18" gracePeriod=600 Dec 05 14:10:45 crc kubenswrapper[4858]: I1205 14:10:45.014381 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="aeb26ce2f72c5b27c0b5939e948f7b4c1c734a8dc5b04d0306f5422f039d5f18" exitCode=0 Dec 05 14:10:45 crc kubenswrapper[4858]: I1205 14:10:45.014432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"aeb26ce2f72c5b27c0b5939e948f7b4c1c734a8dc5b04d0306f5422f039d5f18"} Dec 05 14:10:45 crc kubenswrapper[4858]: I1205 14:10:45.014493 4858 scope.go:117] "RemoveContainer" containerID="b223ebad30a2f7caa7c0f9f256f2d9437e338680d956fb743d7b1bcdf70d4a7c" Dec 05 14:10:46 crc kubenswrapper[4858]: I1205 14:10:46.022387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"e5e80f882b080532d912d4ccb8829cb93a92e3352e086e2ac39b582773b7cafa"} Dec 05 14:10:50 crc kubenswrapper[4858]: I1205 14:10:50.635773 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-76c99b5449-pn96c" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.230535 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.232383 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.236429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-qwppd" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.247609 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.248711 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.253235 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vfgtp" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.255322 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.261637 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.262849 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.266029 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zcr2d" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.298334 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.321400 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.340790 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.341807 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.344206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmr26\" (UniqueName: \"kubernetes.io/projected/263f58fb-a58e-4842-9117-323cef60aae8-kube-api-access-cmr26\") pod \"barbican-operator-controller-manager-7d9dfd778-nz2tl\" (UID: \"263f58fb-a58e-4842-9117-323cef60aae8\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.344354 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.345463 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.355054 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-k8fgb" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.355301 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8l2jm" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.360554 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.369908 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.374991 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.376205 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.379343 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mh79z" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.397777 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.432548 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.433518 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.437225 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gtfs8" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.437255 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.445533 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mshk5\" (UniqueName: \"kubernetes.io/projected/82620a48-19bb-475e-81a4-3721c91bfa64-kube-api-access-mshk5\") pod \"glance-operator-controller-manager-77987cd8cd-nkckp\" (UID: \"82620a48-19bb-475e-81a4-3721c91bfa64\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.445668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8dfz\" (UniqueName: \"kubernetes.io/projected/f46597a6-55e2-49fa-8ee8-6fe7db5be4cb-kube-api-access-j8dfz\") pod \"heat-operator-controller-manager-5f64f6f8bb-92n7j\" (UID: \"f46597a6-55e2-49fa-8ee8-6fe7db5be4cb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.445792 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmr26\" (UniqueName: \"kubernetes.io/projected/263f58fb-a58e-4842-9117-323cef60aae8-kube-api-access-cmr26\") pod \"barbican-operator-controller-manager-7d9dfd778-nz2tl\" (UID: \"263f58fb-a58e-4842-9117-323cef60aae8\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.445892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhhl8\" (UniqueName: \"kubernetes.io/projected/f482f790-9250-42a9-b5a5-e0509b1b0e10-kube-api-access-nhhl8\") pod \"designate-operator-controller-manager-78b4bc895b-jscs5\" (UID: \"f482f790-9250-42a9-b5a5-e0509b1b0e10\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.445964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmfms\" (UniqueName: \"kubernetes.io/projected/1b6160ac-d6c8-448d-b849-4b0455cec2c1-kube-api-access-tmfms\") pod 
\"cinder-operator-controller-manager-859b6ccc6-lst9j\" (UID: \"1b6160ac-d6c8-448d-b849-4b0455cec2c1\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.448573 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.449499 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.450989 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lk4sv" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.470992 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.485582 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.488939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmr26\" (UniqueName: \"kubernetes.io/projected/263f58fb-a58e-4842-9117-323cef60aae8-kube-api-access-cmr26\") pod \"barbican-operator-controller-manager-7d9dfd778-nz2tl\" (UID: \"263f58fb-a58e-4842-9117-323cef60aae8\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.526383 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.542121 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.554953 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gmkrs" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.565913 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.566054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjzgc\" (UniqueName: \"kubernetes.io/projected/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-kube-api-access-gjzgc\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.566161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mshk5\" (UniqueName: \"kubernetes.io/projected/82620a48-19bb-475e-81a4-3721c91bfa64-kube-api-access-mshk5\") pod \"glance-operator-controller-manager-77987cd8cd-nkckp\" (UID: \"82620a48-19bb-475e-81a4-3721c91bfa64\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.566235 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfh5\" (UniqueName: \"kubernetes.io/projected/c4dec80f-540d-4397-bab7-53f3e1739f7b-kube-api-access-xhfh5\") pod \"horizon-operator-controller-manager-68c6d99b8f-bp9v9\" (UID: \"c4dec80f-540d-4397-bab7-53f3e1739f7b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.566270 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz6tw\" (UniqueName: \"kubernetes.io/projected/c71e1565-e737-42ce-b309-29b487e26853-kube-api-access-rz6tw\") pod \"ironic-operator-controller-manager-6c548fd776-6rlkv\" (UID: \"c71e1565-e737-42ce-b309-29b487e26853\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.566331 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8dfz\" (UniqueName: \"kubernetes.io/projected/f46597a6-55e2-49fa-8ee8-6fe7db5be4cb-kube-api-access-j8dfz\") pod \"heat-operator-controller-manager-5f64f6f8bb-92n7j\" (UID: \"f46597a6-55e2-49fa-8ee8-6fe7db5be4cb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.566378 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhhl8\" (UniqueName: \"kubernetes.io/projected/f482f790-9250-42a9-b5a5-e0509b1b0e10-kube-api-access-nhhl8\") pod \"designate-operator-controller-manager-78b4bc895b-jscs5\" (UID: \"f482f790-9250-42a9-b5a5-e0509b1b0e10\") " 
pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.566412 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmfms\" (UniqueName: \"kubernetes.io/projected/1b6160ac-d6c8-448d-b849-4b0455cec2c1-kube-api-access-tmfms\") pod \"cinder-operator-controller-manager-859b6ccc6-lst9j\" (UID: \"1b6160ac-d6c8-448d-b849-4b0455cec2c1\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.567164 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.609421 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mshk5\" (UniqueName: \"kubernetes.io/projected/82620a48-19bb-475e-81a4-3721c91bfa64-kube-api-access-mshk5\") pod \"glance-operator-controller-manager-77987cd8cd-nkckp\" (UID: \"82620a48-19bb-475e-81a4-3721c91bfa64\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.624390 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.629427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8dfz\" (UniqueName: \"kubernetes.io/projected/f46597a6-55e2-49fa-8ee8-6fe7db5be4cb-kube-api-access-j8dfz\") pod \"heat-operator-controller-manager-5f64f6f8bb-92n7j\" (UID: \"f46597a6-55e2-49fa-8ee8-6fe7db5be4cb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.636479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhhl8\" (UniqueName: \"kubernetes.io/projected/f482f790-9250-42a9-b5a5-e0509b1b0e10-kube-api-access-nhhl8\") pod \"designate-operator-controller-manager-78b4bc895b-jscs5\" (UID: \"f482f790-9250-42a9-b5a5-e0509b1b0e10\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.655972 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.657327 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.662015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmfms\" (UniqueName: \"kubernetes.io/projected/1b6160ac-d6c8-448d-b849-4b0455cec2c1-kube-api-access-tmfms\") pod \"cinder-operator-controller-manager-859b6ccc6-lst9j\" (UID: \"1b6160ac-d6c8-448d-b849-4b0455cec2c1\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.665231 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-flnnf" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.671160 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjzgc\" (UniqueName: \"kubernetes.io/projected/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-kube-api-access-gjzgc\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.671230 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfh5\" (UniqueName: \"kubernetes.io/projected/c4dec80f-540d-4397-bab7-53f3e1739f7b-kube-api-access-xhfh5\") pod \"horizon-operator-controller-manager-68c6d99b8f-bp9v9\" (UID: \"c4dec80f-540d-4397-bab7-53f3e1739f7b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.671250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz6tw\" (UniqueName: \"kubernetes.io/projected/c71e1565-e737-42ce-b309-29b487e26853-kube-api-access-rz6tw\") pod \"ironic-operator-controller-manager-6c548fd776-6rlkv\" (UID: \"c71e1565-e737-42ce-b309-29b487e26853\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.671292 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xj6\" (UniqueName: \"kubernetes.io/projected/34b5ac68-a347-4e14-b678-371378c55b7a-kube-api-access-c8xj6\") pod \"manila-operator-controller-manager-7c79b5df47-rjkwx\" (UID: \"34b5ac68-a347-4e14-b678-371378c55b7a\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.671316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:17 crc kubenswrapper[4858]: E1205 14:11:17.671435 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:17 crc kubenswrapper[4858]: E1205 14:11:17.671488 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert podName:4c9d3c6a-fda7-468e-9099-5f09c2dbdbed nodeName:}" failed. 
No retries permitted until 2025-12-05 14:11:18.17147037 +0000 UTC m=+886.719068509 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert") pod "infra-operator-controller-manager-57548d458d-t8ww2" (UID: "4c9d3c6a-fda7-468e-9099-5f09c2dbdbed") : secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.671580 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.672595 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.673643 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.679400 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.691082 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-w7rxw" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.706985 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.729081 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.749175 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfh5\" (UniqueName: \"kubernetes.io/projected/c4dec80f-540d-4397-bab7-53f3e1739f7b-kube-api-access-xhfh5\") pod \"horizon-operator-controller-manager-68c6d99b8f-bp9v9\" (UID: \"c4dec80f-540d-4397-bab7-53f3e1739f7b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.752338 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.753240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz6tw\" (UniqueName: \"kubernetes.io/projected/c71e1565-e737-42ce-b309-29b487e26853-kube-api-access-rz6tw\") pod \"ironic-operator-controller-manager-6c548fd776-6rlkv\" (UID: \"c71e1565-e737-42ce-b309-29b487e26853\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.753552 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.774382 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mct2l\" (UniqueName: \"kubernetes.io/projected/a602bef3-00cb-471f-898e-7abcf5d90add-kube-api-access-mct2l\") pod \"mariadb-operator-controller-manager-56bbcc9d85-9wwms\" (UID: \"a602bef3-00cb-471f-898e-7abcf5d90add\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.774475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk5rb\" (UniqueName: \"kubernetes.io/projected/f33ab949-382d-454e-9c4a-6e636a1f4bdc-kube-api-access-hk5rb\") pod \"keystone-operator-controller-manager-7765d96ddf-tfs6p\" (UID: \"f33ab949-382d-454e-9c4a-6e636a1f4bdc\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.774505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8xj6\" (UniqueName: \"kubernetes.io/projected/34b5ac68-a347-4e14-b678-371378c55b7a-kube-api-access-c8xj6\") pod \"manila-operator-controller-manager-7c79b5df47-rjkwx\" (UID: \"34b5ac68-a347-4e14-b678-371378c55b7a\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.776876 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.778050 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.781472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjzgc\" (UniqueName: \"kubernetes.io/projected/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-kube-api-access-gjzgc\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.785691 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.792842 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qjvpp" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.793171 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jhh62" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.793518 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.816049 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.826888 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.828258 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.833104 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-tlq5t" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.861241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8xj6\" (UniqueName: \"kubernetes.io/projected/34b5ac68-a347-4e14-b678-371378c55b7a-kube-api-access-c8xj6\") pod \"manila-operator-controller-manager-7c79b5df47-rjkwx\" (UID: \"34b5ac68-a347-4e14-b678-371378c55b7a\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.863351 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.869002 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.871728 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.899174 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.875427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mct2l\" (UniqueName: \"kubernetes.io/projected/a602bef3-00cb-471f-898e-7abcf5d90add-kube-api-access-mct2l\") pod \"mariadb-operator-controller-manager-56bbcc9d85-9wwms\" (UID: \"a602bef3-00cb-471f-898e-7abcf5d90add\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.900807 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5799\" (UniqueName: \"kubernetes.io/projected/66f3a723-6f38-4b27-9363-bbe77135d954-kube-api-access-v5799\") pod \"nova-operator-controller-manager-697bc559fc-4lcwv\" (UID: \"66f3a723-6f38-4b27-9363-bbe77135d954\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.900901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk5rb\" (UniqueName: \"kubernetes.io/projected/f33ab949-382d-454e-9c4a-6e636a1f4bdc-kube-api-access-hk5rb\") pod \"keystone-operator-controller-manager-7765d96ddf-tfs6p\" (UID: \"f33ab949-382d-454e-9c4a-6e636a1f4bdc\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.901004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxzb\" (UniqueName: \"kubernetes.io/projected/992029c2-7acc-4f87-b054-4a062babc670-kube-api-access-djxzb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-6xnwj\" (UID: \"992029c2-7acc-4f87-b054-4a062babc670\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.894965 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.926417 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-29ck8" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.933668 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.952389 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mct2l\" (UniqueName: \"kubernetes.io/projected/a602bef3-00cb-471f-898e-7abcf5d90add-kube-api-access-mct2l\") pod \"mariadb-operator-controller-manager-56bbcc9d85-9wwms\" (UID: \"a602bef3-00cb-471f-898e-7abcf5d90add\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.970744 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.977773 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk5rb\" (UniqueName: \"kubernetes.io/projected/f33ab949-382d-454e-9c4a-6e636a1f4bdc-kube-api-access-hk5rb\") pod \"keystone-operator-controller-manager-7765d96ddf-tfs6p\" (UID: \"f33ab949-382d-454e-9c4a-6e636a1f4bdc\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.979543 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.979736 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-748vt" Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.993121 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh"] Dec 05 14:11:17 crc kubenswrapper[4858]: I1205 14:11:17.995334 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.002672 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxl4s\" (UniqueName: \"kubernetes.io/projected/29cf74b8-eb6d-4655-876e-10e917166426-kube-api-access-fxl4s\") pod \"ovn-operator-controller-manager-b6456fdb6-8tvrh\" (UID: \"29cf74b8-eb6d-4655-876e-10e917166426\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.002722 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxzb\" (UniqueName: \"kubernetes.io/projected/992029c2-7acc-4f87-b054-4a062babc670-kube-api-access-djxzb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-6xnwj\" (UID: \"992029c2-7acc-4f87-b054-4a062babc670\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.002750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfkd\" (UniqueName: \"kubernetes.io/projected/7f9fa0fa-c2f8-4624-849e-088b48b9e71d-kube-api-access-fhfkd\") pod \"octavia-operator-controller-manager-998648c74-tbh8l\" (UID: \"7f9fa0fa-c2f8-4624-849e-088b48b9e71d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.002791 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.002809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxqp\" (UniqueName: 
\"kubernetes.io/projected/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-kube-api-access-mmxqp\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.002865 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5799\" (UniqueName: \"kubernetes.io/projected/66f3a723-6f38-4b27-9363-bbe77135d954-kube-api-access-v5799\") pod \"nova-operator-controller-manager-697bc559fc-4lcwv\" (UID: \"66f3a723-6f38-4b27-9363-bbe77135d954\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.017454 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.023039 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.024392 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.030883 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mthtd" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.092923 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.094449 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.104733 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2j7t9" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.105438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxl4s\" (UniqueName: \"kubernetes.io/projected/29cf74b8-eb6d-4655-876e-10e917166426-kube-api-access-fxl4s\") pod \"ovn-operator-controller-manager-b6456fdb6-8tvrh\" (UID: \"29cf74b8-eb6d-4655-876e-10e917166426\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.105510 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhfkd\" (UniqueName: \"kubernetes.io/projected/7f9fa0fa-c2f8-4624-849e-088b48b9e71d-kube-api-access-fhfkd\") pod \"octavia-operator-controller-manager-998648c74-tbh8l\" (UID: \"7f9fa0fa-c2f8-4624-849e-088b48b9e71d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.105560 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.105589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxqp\" (UniqueName: \"kubernetes.io/projected/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-kube-api-access-mmxqp\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.106271 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.106318 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert podName:19f67bc9-5b77-4904-9aaf-8dbd7877d30d nodeName:}" failed. No retries permitted until 2025-12-05 14:11:18.606300591 +0000 UTC m=+887.153898730 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" (UID: "19f67bc9-5b77-4904-9aaf-8dbd7877d30d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.115797 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.149301 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxzb\" (UniqueName: \"kubernetes.io/projected/992029c2-7acc-4f87-b054-4a062babc670-kube-api-access-djxzb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-6xnwj\" (UID: \"992029c2-7acc-4f87-b054-4a062babc670\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.149430 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.160873 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.160923 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5799\" (UniqueName: \"kubernetes.io/projected/66f3a723-6f38-4b27-9363-bbe77135d954-kube-api-access-v5799\") pod \"nova-operator-controller-manager-697bc559fc-4lcwv\" (UID: \"66f3a723-6f38-4b27-9363-bbe77135d954\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.167315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxqp\" (UniqueName: \"kubernetes.io/projected/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-kube-api-access-mmxqp\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.168173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhfkd\" (UniqueName: \"kubernetes.io/projected/7f9fa0fa-c2f8-4624-849e-088b48b9e71d-kube-api-access-fhfkd\") pod \"octavia-operator-controller-manager-998648c74-tbh8l\" (UID: \"7f9fa0fa-c2f8-4624-849e-088b48b9e71d\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.177515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxl4s\" (UniqueName: \"kubernetes.io/projected/29cf74b8-eb6d-4655-876e-10e917166426-kube-api-access-fxl4s\") pod \"ovn-operator-controller-manager-b6456fdb6-8tvrh\" (UID: \"29cf74b8-eb6d-4655-876e-10e917166426\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.193781 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.195076 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.198626 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.202253 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8nm6t" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.206645 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skcqk\" (UniqueName: \"kubernetes.io/projected/e033dea2-183c-4853-b77e-e77857882a4d-kube-api-access-skcqk\") pod \"placement-operator-controller-manager-78f8948974-xpqrm\" (UID: \"e033dea2-183c-4853-b77e-e77857882a4d\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.206730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7jw\" (UniqueName: \"kubernetes.io/projected/9f3dcc24-a808-434b-a487-c9a82145bc98-kube-api-access-kn7jw\") pod \"swift-operator-controller-manager-5f8c65bbfc-w4zrw\" (UID: \"9f3dcc24-a808-434b-a487-c9a82145bc98\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.206775 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.206885 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.207673 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert podName:4c9d3c6a-fda7-468e-9099-5f09c2dbdbed nodeName:}" failed. No retries permitted until 2025-12-05 14:11:19.207656078 +0000 UTC m=+887.755254227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert") pod "infra-operator-controller-manager-57548d458d-t8ww2" (UID: "4c9d3c6a-fda7-468e-9099-5f09c2dbdbed") : secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.232909 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.236267 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.307989 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.318119 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pdm8\" (UniqueName: \"kubernetes.io/projected/59405248-ef7c-4944-a9a4-724e24cf22af-kube-api-access-4pdm8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-c8s9k\" (UID: \"59405248-ef7c-4944-a9a4-724e24cf22af\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.318262 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skcqk\" (UniqueName: \"kubernetes.io/projected/e033dea2-183c-4853-b77e-e77857882a4d-kube-api-access-skcqk\") pod \"placement-operator-controller-manager-78f8948974-xpqrm\" (UID: \"e033dea2-183c-4853-b77e-e77857882a4d\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.318373 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7jw\" (UniqueName: \"kubernetes.io/projected/9f3dcc24-a808-434b-a487-c9a82145bc98-kube-api-access-kn7jw\") pod \"swift-operator-controller-manager-5f8c65bbfc-w4zrw\" (UID: \"9f3dcc24-a808-434b-a487-c9a82145bc98\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.327762 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cfgwt" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.334130 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.347413 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.360880 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.386742 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.399044 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7jw\" (UniqueName: \"kubernetes.io/projected/9f3dcc24-a808-434b-a487-c9a82145bc98-kube-api-access-kn7jw\") pod \"swift-operator-controller-manager-5f8c65bbfc-w4zrw\" (UID: \"9f3dcc24-a808-434b-a487-c9a82145bc98\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.409344 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.412033 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.429983 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pdm8\" (UniqueName: \"kubernetes.io/projected/59405248-ef7c-4944-a9a4-724e24cf22af-kube-api-access-4pdm8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-c8s9k\" (UID: \"59405248-ef7c-4944-a9a4-724e24cf22af\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.430248 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jmb\" (UniqueName: \"kubernetes.io/projected/aa187928-b3b8-40e6-b60b-19d84781e34c-kube-api-access-v9jmb\") pod \"test-operator-controller-manager-5854674fcc-hvgl6\" (UID: \"aa187928-b3b8-40e6-b60b-19d84781e34c\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.432099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skcqk\" (UniqueName: \"kubernetes.io/projected/e033dea2-183c-4853-b77e-e77857882a4d-kube-api-access-skcqk\") pod \"placement-operator-controller-manager-78f8948974-xpqrm\" (UID: \"e033dea2-183c-4853-b77e-e77857882a4d\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.456629 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pdm8\" (UniqueName: \"kubernetes.io/projected/59405248-ef7c-4944-a9a4-724e24cf22af-kube-api-access-4pdm8\") pod \"telemetry-operator-controller-manager-76cc84c6bb-c8s9k\" (UID: \"59405248-ef7c-4944-a9a4-724e24cf22af\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.491495 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.492706 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.499032 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-zf9j2" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.516904 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.533486 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9jmb\" (UniqueName: \"kubernetes.io/projected/aa187928-b3b8-40e6-b60b-19d84781e34c-kube-api-access-v9jmb\") pod \"test-operator-controller-manager-5854674fcc-hvgl6\" (UID: \"aa187928-b3b8-40e6-b60b-19d84781e34c\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.550854 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.563240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jmb\" (UniqueName: \"kubernetes.io/projected/aa187928-b3b8-40e6-b60b-19d84781e34c-kube-api-access-v9jmb\") pod \"test-operator-controller-manager-5854674fcc-hvgl6\" (UID: \"aa187928-b3b8-40e6-b60b-19d84781e34c\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.567410 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.597541 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.598459 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.602761 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pjlpk" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.602974 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.603074 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.604919 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.626836 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.634414 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktmq\" (UniqueName: \"kubernetes.io/projected/5401bf83-09b5-464f-b52c-210a3fa92aa1-kube-api-access-qktmq\") pod \"watcher-operator-controller-manager-769dc69bc-rbddp\" (UID: \"5401bf83-09b5-464f-b52c-210a3fa92aa1\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.634480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.634634 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.634677 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert podName:19f67bc9-5b77-4904-9aaf-8dbd7877d30d nodeName:}" failed. No retries permitted until 2025-12-05 14:11:19.634664067 +0000 UTC m=+888.182262206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" (UID: "19f67bc9-5b77-4904-9aaf-8dbd7877d30d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.644083 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.645244 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.658289 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7n4pf" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.664315 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.689696 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl"] Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.733157 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.735266 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.735443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kz7q\" (UniqueName: \"kubernetes.io/projected/e4cdac6d-f595-4307-939d-688045771951-kube-api-access-9kz7q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-99hbh\" (UID: \"e4cdac6d-f595-4307-939d-688045771951\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.740128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.740319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qktmq\" (UniqueName: \"kubernetes.io/projected/5401bf83-09b5-464f-b52c-210a3fa92aa1-kube-api-access-qktmq\") pod \"watcher-operator-controller-manager-769dc69bc-rbddp\" (UID: \"5401bf83-09b5-464f-b52c-210a3fa92aa1\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.740428 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gq2\" (UniqueName: \"kubernetes.io/projected/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-kube-api-access-25gq2\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.794481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qktmq\" (UniqueName: \"kubernetes.io/projected/5401bf83-09b5-464f-b52c-210a3fa92aa1-kube-api-access-qktmq\") pod \"watcher-operator-controller-manager-769dc69bc-rbddp\" (UID: \"5401bf83-09b5-464f-b52c-210a3fa92aa1\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.852232 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25gq2\" (UniqueName: \"kubernetes.io/projected/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-kube-api-access-25gq2\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.852309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.852580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kz7q\" (UniqueName: \"kubernetes.io/projected/e4cdac6d-f595-4307-939d-688045771951-kube-api-access-9kz7q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-99hbh\" (UID: \"e4cdac6d-f595-4307-939d-688045771951\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.852608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.852808 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.852886 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:19.352867601 +0000 UTC m=+887.900465740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "metrics-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.853194 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: E1205 14:11:18.853278 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:19.353254509 +0000 UTC m=+887.900852858 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "webhook-server-cert" not found Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.885913 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kz7q\" (UniqueName: \"kubernetes.io/projected/e4cdac6d-f595-4307-939d-688045771951-kube-api-access-9kz7q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-99hbh\" (UID: \"e4cdac6d-f595-4307-939d-688045771951\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" Dec 05 14:11:18 crc kubenswrapper[4858]: I1205 14:11:18.893896 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25gq2\" (UniqueName: \"kubernetes.io/projected/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-kube-api-access-25gq2\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.077649 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.126123 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.220063 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv"] Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.236010 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp"] Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.248276 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j"] Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.255170 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" event={"ID":"263f58fb-a58e-4842-9117-323cef60aae8","Type":"ContainerStarted","Data":"9ed3bf5246e37added444205f426e7db9393f8c2235351e1b44b568e49244651"} Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.258105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.258255 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.258319 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert podName:4c9d3c6a-fda7-468e-9099-5f09c2dbdbed nodeName:}" failed. 
Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.258319 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert podName:4c9d3c6a-fda7-468e-9099-5f09c2dbdbed nodeName:}" failed. No retries permitted until 2025-12-05 14:11:21.258298802 +0000 UTC m=+889.805896941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert") pod "infra-operator-controller-manager-57548d458d-t8ww2" (UID: "4c9d3c6a-fda7-468e-9099-5f09c2dbdbed") : secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.361778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.361957 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.362227 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.362330 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:20.362288711 +0000 UTC m=+888.909886860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "metrics-server-cert" not found Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.362591 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.362662 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:20.362644799 +0000 UTC m=+888.910242938 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "webhook-server-cert" not found Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.383419 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5"] Dec 05 14:11:19 crc kubenswrapper[4858]: W1205 14:11:19.386260 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf482f790_9250_42a9_b5a5_e0509b1b0e10.slice/crio-185f6b65d0aa614211fc2345124892c657381d12f78b0616cf8319e501f324e3 WatchSource:0}: Error finding container 185f6b65d0aa614211fc2345124892c657381d12f78b0616cf8319e501f324e3: Status 404 returned error can't find the container with id 185f6b65d0aa614211fc2345124892c657381d12f78b0616cf8319e501f324e3 Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.391984 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j"] Dec 05 14:11:19 crc kubenswrapper[4858]: W1205 14:11:19.401984 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b6160ac_d6c8_448d_b849_4b0455cec2c1.slice/crio-8641ef0b1c8d3765a962a727380fd7f28d6b0dcdcfb5b4e75c7e78554765ef8c WatchSource:0}: Error finding container 8641ef0b1c8d3765a962a727380fd7f28d6b0dcdcfb5b4e75c7e78554765ef8c: Status 404 returned error can't find the container with id 8641ef0b1c8d3765a962a727380fd7f28d6b0dcdcfb5b4e75c7e78554765ef8c Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.669553 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.669720 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:19 crc kubenswrapper[4858]: E1205 14:11:19.669769 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert podName:19f67bc9-5b77-4904-9aaf-8dbd7877d30d nodeName:}" failed. No retries permitted until 2025-12-05 14:11:21.669755363 +0000 UTC m=+890.217353502 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" (UID: "19f67bc9-5b77-4904-9aaf-8dbd7877d30d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.721300 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv"] Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.804548 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh"] Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.821556 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx"] Dec 05 14:11:19 crc kubenswrapper[4858]: W1205 14:11:19.823782 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34b5ac68_a347_4e14_b678_371378c55b7a.slice/crio-4bb61b6e0d53b8314f5d5e08cdaa2a979b392128438272d3930cfe96483e8ef1 WatchSource:0}: Error finding container 4bb61b6e0d53b8314f5d5e08cdaa2a979b392128438272d3930cfe96483e8ef1: Status 404 returned error can't find the container with id 4bb61b6e0d53b8314f5d5e08cdaa2a979b392128438272d3930cfe96483e8ef1 Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.834000 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9"] Dec 05 14:11:19 crc kubenswrapper[4858]: I1205 14:11:19.914942 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj"] Dec 05 14:11:19 crc kubenswrapper[4858]: W1205 14:11:19.931604 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa187928_b3b8_40e6_b60b_19d84781e34c.slice/crio-a9411efb3babe19c7ac0993d5fcd73f52d12843e8ccb613a2ebdb34096b1f290 WatchSource:0}: Error finding container a9411efb3babe19c7ac0993d5fcd73f52d12843e8ccb613a2ebdb34096b1f290: Status 404 returned error can't find the container with id a9411efb3babe19c7ac0993d5fcd73f52d12843e8ccb613a2ebdb34096b1f290 Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.023648 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6"] Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.023724 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw"] Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.023735 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k"] Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.023745 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm"] Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.024367 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
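[Annotation] The "Unhandled Error" entries that follow (full &Container{...} spec dumps) all end the same way: ErrImagePull: pull QPS exceeded. Nothing is wrong with the images themselves; starting roughly twenty operator pods in the same second exhausts the kubelet's client-side image-pull rate limit (the registryPullQPS / registryBurst settings in the kubelet configuration, 5 and 10 by default), and the pulls are retried with backoff. The limiter behaves like a token bucket; a sketch of the idea using golang.org/x/time/rate, as an analogy rather than the kubelet's actual code path:

```go
// pullqps.go - token-bucket analogy for the kubelet's image-pull limiter.
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// registryPullQPS=5, registryBurst=10 equivalents.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	// ~20 operator images requested at essentially the same instant,
	// as in this log window.
	for i := 1; i <= 20; i++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: started\n", i)
		} else {
			// kubelet surfaces this as ErrImagePull: "pull QPS exceeded"
			fmt.Printf("pull %2d: pull QPS exceeded, retry with backoff\n", i)
		}
	}
}
```

With burst 10, the first ten requests go through immediately and the rest are rejected until tokens refill at 5/s, which matches a single wave of failures followed by successful retries. The interleaved cAdvisor warnings ("Failed to process watch event ... Status 404") are a separate, benign race between the cgroup watcher and CRI-O container setup.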
Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.024367 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mct2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-9wwms_openstack-operators(a602bef3-00cb-471f-898e-7abcf5d90add): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.043242 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p"] Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.056444 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms"] Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.086037 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {}
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djxzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-6xnwj_openstack-operators(992029c2-7acc-4f87-b054-4a062babc670): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.090201 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hk5rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-tfs6p_openstack-operators(f33ab949-382d-454e-9c4a-6e636a1f4bdc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.090344 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djxzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-6xnwj_openstack-operators(992029c2-7acc-4f87-b054-4a062babc670): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.100253 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" podUID="992029c2-7acc-4f87-b054-4a062babc670" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.104399 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hk5rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-tfs6p_openstack-operators(f33ab949-382d-454e-9c4a-6e636a1f4bdc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.113218 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" podUID="f33ab949-382d-454e-9c4a-6e636a1f4bdc" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.119449 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l"] Dec 05 14:11:20 crc kubenswrapper[4858]: W1205 14:11:20.130833 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4cdac6d_f595_4307_939d_688045771951.slice/crio-4664e598ba58aab1703d632d4e4943eabb1575bc2da93d554aef060ab1f4e713 WatchSource:0}: Error finding container 4664e598ba58aab1703d632d4e4943eabb1575bc2da93d554aef060ab1f4e713: Status 404 returned error can't find the container with id 4664e598ba58aab1703d632d4e4943eabb1575bc2da93d554aef060ab1f4e713 Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.132097 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh"] Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.137361 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kz7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-99hbh_openstack-operators(e4cdac6d-f595-4307-939d-688045771951): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.147159 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" podUID="e4cdac6d-f595-4307-939d-688045771951" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.168358 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp"] Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.172873 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhfkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tbh8l_openstack-operators(7f9fa0fa-c2f8-4624-849e-088b48b9e71d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.175325 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhfkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-tbh8l_openstack-operators(7f9fa0fa-c2f8-4624-849e-088b48b9e71d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.176491 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS 
exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" podUID="7f9fa0fa-c2f8-4624-849e-088b48b9e71d" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.208344 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qktmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-rbddp_openstack-operators(5401bf83-09b5-464f-b52c-210a3fa92aa1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.211739 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qktmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-rbddp_openstack-operators(5401bf83-09b5-464f-b52c-210a3fa92aa1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.212900 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" podUID="5401bf83-09b5-464f-b52c-210a3fa92aa1" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.284941 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" event={"ID":"e4cdac6d-f595-4307-939d-688045771951","Type":"ContainerStarted","Data":"4664e598ba58aab1703d632d4e4943eabb1575bc2da93d554aef060ab1f4e713"} Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.299256 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" podUID="e4cdac6d-f595-4307-939d-688045771951" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.306154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" event={"ID":"c4dec80f-540d-4397-bab7-53f3e1739f7b","Type":"ContainerStarted","Data":"399d7e51311c1833f423d40feb5ddef6af14c20960201c1f8144ea5de14029d3"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.316746 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" event={"ID":"aa187928-b3b8-40e6-b60b-19d84781e34c","Type":"ContainerStarted","Data":"a9411efb3babe19c7ac0993d5fcd73f52d12843e8ccb613a2ebdb34096b1f290"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.331249 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" event={"ID":"59405248-ef7c-4944-a9a4-724e24cf22af","Type":"ContainerStarted","Data":"3765fd5d8adae13b46778adb7d862b914e3cefa393f33bc7ecdd16d7881b8bf3"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.333166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" 
event={"ID":"82620a48-19bb-475e-81a4-3721c91bfa64","Type":"ContainerStarted","Data":"e1fadda5ff1ee116d17f09db916f710b7ebcaec8b764f89cd8712d035b66c19e"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.334654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" event={"ID":"992029c2-7acc-4f87-b054-4a062babc670","Type":"ContainerStarted","Data":"2020a48e8d67940937fe2803845581aea91c333422f7d17238b4b2b84e072a97"} Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.349861 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" podUID="992029c2-7acc-4f87-b054-4a062babc670" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.390127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" event={"ID":"f482f790-9250-42a9-b5a5-e0509b1b0e10","Type":"ContainerStarted","Data":"185f6b65d0aa614211fc2345124892c657381d12f78b0616cf8319e501f324e3"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.402095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" event={"ID":"f46597a6-55e2-49fa-8ee8-6fe7db5be4cb","Type":"ContainerStarted","Data":"be1307c40bcf487ae4b3b0385016721cd8260aa86ff58823dd963d48036067bc"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.405244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.405354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.405500 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.405547 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:22.405531155 +0000 UTC m=+890.953129294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "webhook-server-cert" not found Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.406313 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.406342 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:22.406334373 +0000 UTC m=+890.953932512 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "metrics-server-cert" not found Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.422471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" event={"ID":"66f3a723-6f38-4b27-9363-bbe77135d954","Type":"ContainerStarted","Data":"b928b938852e438492c03ce416c007a9d113ee74f3f78ab5d8f535fb082bf534"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.429344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" event={"ID":"29cf74b8-eb6d-4655-876e-10e917166426","Type":"ContainerStarted","Data":"4608884d23dcc78cb560f78f296d4d560d94b1d40fc0a8c4f7693dd55f48813a"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.430354 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" event={"ID":"f33ab949-382d-454e-9c4a-6e636a1f4bdc","Type":"ContainerStarted","Data":"bb6d53ee1e2fbf3710cf975464002d0033100937c7fa5482b62c28d873f76876"} Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.432315 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" podUID="f33ab949-382d-454e-9c4a-6e636a1f4bdc" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.432693 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" event={"ID":"1b6160ac-d6c8-448d-b849-4b0455cec2c1","Type":"ContainerStarted","Data":"8641ef0b1c8d3765a962a727380fd7f28d6b0dcdcfb5b4e75c7e78554765ef8c"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.462188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" event={"ID":"9f3dcc24-a808-434b-a487-c9a82145bc98","Type":"ContainerStarted","Data":"7d04d5468a2d519ec25af2d7e5e87480b3d451581f742384fa7a56aaa4edae6f"} Dec 05 14:11:20 crc 
kubenswrapper[4858]: I1205 14:11:20.464178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" event={"ID":"a602bef3-00cb-471f-898e-7abcf5d90add","Type":"ContainerStarted","Data":"5c5de49408ae5afa666ce481710aa002a08ac9058066ea44d7ab10bdaeb72f4b"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.473131 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" event={"ID":"5401bf83-09b5-464f-b52c-210a3fa92aa1","Type":"ContainerStarted","Data":"ca90b647968611246a2f589c70ecea886ce48ef876f73df1f3b26d45f6c59b15"} Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.474536 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" podUID="5401bf83-09b5-464f-b52c-210a3fa92aa1" Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.477961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" event={"ID":"c71e1565-e737-42ce-b309-29b487e26853","Type":"ContainerStarted","Data":"74227211511e181c628a731868d0d16c5b9645d573d3cdb67df8730942a338ad"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.483026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" event={"ID":"e033dea2-183c-4853-b77e-e77857882a4d","Type":"ContainerStarted","Data":"1b1d40d8b6505b626bd7075bbb75099d886fd9be309ec3392e63441bf9d0e640"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.483748 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" event={"ID":"34b5ac68-a347-4e14-b678-371378c55b7a","Type":"ContainerStarted","Data":"4bb61b6e0d53b8314f5d5e08cdaa2a979b392128438272d3930cfe96483e8ef1"} Dec 05 14:11:20 crc kubenswrapper[4858]: I1205 14:11:20.516010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" event={"ID":"7f9fa0fa-c2f8-4624-849e-088b48b9e71d","Type":"ContainerStarted","Data":"50a6964d47981e509811cf6da7797d00d411c7817586943794e40117eb6dbe90"} Dec 05 14:11:20 crc kubenswrapper[4858]: E1205 14:11:20.558716 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" podUID="7f9fa0fa-c2f8-4624-849e-088b48b9e71d" Dec 05 14:11:21 crc kubenswrapper[4858]: I1205 14:11:21.319018 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:21 crc kubenswrapper[4858]: E1205 14:11:21.319191 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:21 crc kubenswrapper[4858]: E1205 14:11:21.319375 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert podName:4c9d3c6a-fda7-468e-9099-5f09c2dbdbed nodeName:}" failed. No retries permitted until 2025-12-05 14:11:25.31923091 +0000 UTC m=+893.866829049 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert") pod "infra-operator-controller-manager-57548d458d-t8ww2" (UID: "4c9d3c6a-fda7-468e-9099-5f09c2dbdbed") : secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:21 crc kubenswrapper[4858]: E1205 14:11:21.557956 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" podUID="e4cdac6d-f595-4307-939d-688045771951" Dec 05 14:11:21 crc kubenswrapper[4858]: E1205 14:11:21.558077 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" podUID="5401bf83-09b5-464f-b52c-210a3fa92aa1" Dec 05 14:11:21 crc kubenswrapper[4858]: E1205 14:11:21.558393 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" podUID="7f9fa0fa-c2f8-4624-849e-088b48b9e71d" Dec 05 14:11:21 crc kubenswrapper[4858]: E1205 14:11:21.558862 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" podUID="f33ab949-382d-454e-9c4a-6e636a1f4bdc" Dec 05 14:11:21 crc 
kubenswrapper[4858]: E1205 14:11:21.559716 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" podUID="992029c2-7acc-4f87-b054-4a062babc670" Dec 05 14:11:21 crc kubenswrapper[4858]: I1205 14:11:21.727094 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:21 crc kubenswrapper[4858]: E1205 14:11:21.729181 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:21 crc kubenswrapper[4858]: E1205 14:11:21.729241 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert podName:19f67bc9-5b77-4904-9aaf-8dbd7877d30d nodeName:}" failed. No retries permitted until 2025-12-05 14:11:25.729222507 +0000 UTC m=+894.276820646 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" (UID: "19f67bc9-5b77-4904-9aaf-8dbd7877d30d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:22 crc kubenswrapper[4858]: I1205 14:11:22.444705 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:22 crc kubenswrapper[4858]: I1205 14:11:22.444804 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:22 crc kubenswrapper[4858]: E1205 14:11:22.444944 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 14:11:22 crc kubenswrapper[4858]: E1205 14:11:22.444998 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:26.444979407 +0000 UTC m=+894.992577546 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "webhook-server-cert" not found Dec 05 14:11:22 crc kubenswrapper[4858]: E1205 14:11:22.445050 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 14:11:22 crc kubenswrapper[4858]: E1205 14:11:22.445071 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:26.445064978 +0000 UTC m=+894.992663117 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "metrics-server-cert" not found Dec 05 14:11:25 crc kubenswrapper[4858]: I1205 14:11:25.390530 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:25 crc kubenswrapper[4858]: E1205 14:11:25.390695 4858 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:25 crc kubenswrapper[4858]: E1205 14:11:25.391001 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert podName:4c9d3c6a-fda7-468e-9099-5f09c2dbdbed nodeName:}" failed. No retries permitted until 2025-12-05 14:11:33.390980569 +0000 UTC m=+901.938578708 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert") pod "infra-operator-controller-manager-57548d458d-t8ww2" (UID: "4c9d3c6a-fda7-468e-9099-5f09c2dbdbed") : secret "infra-operator-webhook-server-cert" not found Dec 05 14:11:25 crc kubenswrapper[4858]: I1205 14:11:25.795187 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:25 crc kubenswrapper[4858]: E1205 14:11:25.795388 4858 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:25 crc kubenswrapper[4858]: E1205 14:11:25.795456 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert podName:19f67bc9-5b77-4904-9aaf-8dbd7877d30d nodeName:}" failed. No retries permitted until 2025-12-05 14:11:33.795436929 +0000 UTC m=+902.343035068 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" (UID: "19f67bc9-5b77-4904-9aaf-8dbd7877d30d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 14:11:26 crc kubenswrapper[4858]: I1205 14:11:26.506446 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:26 crc kubenswrapper[4858]: I1205 14:11:26.506572 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:26 crc kubenswrapper[4858]: E1205 14:11:26.506760 4858 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 05 14:11:26 crc kubenswrapper[4858]: E1205 14:11:26.506876 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:34.506849698 +0000 UTC m=+903.054447977 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "metrics-server-cert" not found Dec 05 14:11:26 crc kubenswrapper[4858]: E1205 14:11:26.507275 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 14:11:26 crc kubenswrapper[4858]: E1205 14:11:26.507323 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:34.507314449 +0000 UTC m=+903.054912578 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "webhook-server-cert" not found Dec 05 14:11:33 crc kubenswrapper[4858]: I1205 14:11:33.402573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:33 crc kubenswrapper[4858]: I1205 14:11:33.436620 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9d3c6a-fda7-468e-9099-5f09c2dbdbed-cert\") pod \"infra-operator-controller-manager-57548d458d-t8ww2\" (UID: \"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:33 crc kubenswrapper[4858]: I1205 14:11:33.652806 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gtfs8" Dec 05 14:11:33 crc kubenswrapper[4858]: I1205 14:11:33.661673 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:11:33 crc kubenswrapper[4858]: I1205 14:11:33.809129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:33 crc kubenswrapper[4858]: I1205 14:11:33.814945 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/19f67bc9-5b77-4904-9aaf-8dbd7877d30d-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8\" (UID: \"19f67bc9-5b77-4904-9aaf-8dbd7877d30d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:34 crc kubenswrapper[4858]: E1205 14:11:34.040069 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 05 14:11:34 crc kubenswrapper[4858]: E1205 14:11:34.040219 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 
0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxl4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-8tvrh_openstack-operators(29cf74b8-eb6d-4655-876e-10e917166426): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:34 crc kubenswrapper[4858]: I1205 14:11:34.098916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-748vt" Dec 05 14:11:34 crc kubenswrapper[4858]: I1205 14:11:34.107955 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:11:34 crc kubenswrapper[4858]: I1205 14:11:34.519294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:34 crc kubenswrapper[4858]: I1205 14:11:34.519614 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:34 crc kubenswrapper[4858]: E1205 14:11:34.519455 4858 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 14:11:34 crc kubenswrapper[4858]: E1205 14:11:34.519714 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs podName:ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456 nodeName:}" failed. No retries permitted until 2025-12-05 14:11:50.519698014 +0000 UTC m=+919.067296153 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs") pod "openstack-operator-controller-manager-7688b5f8b9-9sgf5" (UID: "ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456") : secret "webhook-server-cert" not found Dec 05 14:11:34 crc kubenswrapper[4858]: I1205 14:11:34.526048 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-metrics-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:35 crc kubenswrapper[4858]: E1205 14:11:35.964446 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Dec 05 14:11:35 crc kubenswrapper[4858]: E1205 14:11:35.965247 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rz6tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-6rlkv_openstack-operators(c71e1565-e737-42ce-b309-29b487e26853): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.389563 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxpj"] Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.391192 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.403768 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxpj"] Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.559762 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-catalog-content\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.559889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jbj9\" (UniqueName: \"kubernetes.io/projected/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-kube-api-access-5jbj9\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.559945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-utilities\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.660806 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jbj9\" (UniqueName: \"kubernetes.io/projected/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-kube-api-access-5jbj9\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.660893 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-utilities\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.660955 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-catalog-content\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.661415 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-catalog-content\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.661494 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-utilities\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.682254 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5jbj9\" (UniqueName: \"kubernetes.io/projected/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-kube-api-access-5jbj9\") pod \"redhat-marketplace-lsxpj\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: I1205 14:11:37.710101 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:11:37 crc kubenswrapper[4858]: E1205 14:11:37.862071 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 05 14:11:37 crc kubenswrapper[4858]: E1205 14:11:37.862257 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhfh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-bp9v9_openstack-operators(c4dec80f-540d-4397-bab7-53f3e1739f7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:42 crc kubenswrapper[4858]: E1205 14:11:42.289740 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 05 14:11:42 crc kubenswrapper[4858]: E1205 14:11:42.290331 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mshk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-nkckp_openstack-operators(82620a48-19bb-475e-81a4-3721c91bfa64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:42 crc kubenswrapper[4858]: E1205 14:11:42.884855 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 05 14:11:42 crc kubenswrapper[4858]: E1205 14:11:42.886128 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v9jmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-hvgl6_openstack-operators(aa187928-b3b8-40e6-b60b-19d84781e34c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:43 crc kubenswrapper[4858]: E1205 14:11:43.534794 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Dec 05 14:11:43 crc kubenswrapper[4858]: E1205 14:11:43.534998 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kn7jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-w4zrw_openstack-operators(9f3dcc24-a808-434b-a487-c9a82145bc98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:43 crc kubenswrapper[4858]: I1205 14:11:43.538833 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:11:46 crc kubenswrapper[4858]: E1205 14:11:46.181162 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" Dec 05 14:11:46 crc kubenswrapper[4858]: E1205 14:11:46.181640 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmfms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-lst9j_openstack-operators(1b6160ac-d6c8-448d-b849-4b0455cec2c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:46 crc kubenswrapper[4858]: E1205 14:11:46.700143 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 05 14:11:46 crc kubenswrapper[4858]: E1205 14:11:46.700330 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8dfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-92n7j_openstack-operators(f46597a6-55e2-49fa-8ee8-6fe7db5be4cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:49 crc kubenswrapper[4858]: E1205 14:11:49.356400 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" Dec 05 14:11:49 crc kubenswrapper[4858]: E1205 14:11:49.356869 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8xj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-rjkwx_openstack-operators(34b5ac68-a347-4e14-b678-371378c55b7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:49 crc kubenswrapper[4858]: E1205 14:11:49.855481 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 05 14:11:49 crc kubenswrapper[4858]: E1205 14:11:49.855664 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-skcqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-xpqrm_openstack-operators(e033dea2-183c-4853-b77e-e77857882a4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:50 crc kubenswrapper[4858]: I1205 14:11:50.557544 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:50 crc kubenswrapper[4858]: I1205 14:11:50.580124 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456-webhook-certs\") pod \"openstack-operator-controller-manager-7688b5f8b9-9sgf5\" (UID: \"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456\") " pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:50 crc kubenswrapper[4858]: I1205 14:11:50.614822 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pjlpk" Dec 05 14:11:50 crc kubenswrapper[4858]: I1205 14:11:50.623177 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:11:54 crc kubenswrapper[4858]: E1205 14:11:54.709625 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 05 14:11:54 crc kubenswrapper[4858]: E1205 14:11:54.710292 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pdm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-c8s9k_openstack-operators(59405248-ef7c-4944-a9a4-724e24cf22af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:55 crc kubenswrapper[4858]: E1205 14:11:55.189018 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Dec 05 14:11:55 crc kubenswrapper[4858]: E1205 14:11:55.189277 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhhl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-jscs5_openstack-operators(f482f790-9250-42a9-b5a5-e0509b1b0e10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.397926 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xpwc4"] Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.400043 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.405246 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpwc4"] Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.433534 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-utilities\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.433834 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-catalog-content\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.433970 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6mv\" (UniqueName: \"kubernetes.io/projected/fc614f41-81ce-4c6e-b574-f5e562cf95ff-kube-api-access-xg6mv\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.535638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-utilities\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.535706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-catalog-content\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.535766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6mv\" (UniqueName: \"kubernetes.io/projected/fc614f41-81ce-4c6e-b574-f5e562cf95ff-kube-api-access-xg6mv\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.536635 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-utilities\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.536867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-catalog-content\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.555314 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xg6mv\" (UniqueName: \"kubernetes.io/projected/fc614f41-81ce-4c6e-b574-f5e562cf95ff-kube-api-access-xg6mv\") pod \"community-operators-xpwc4\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:55 crc kubenswrapper[4858]: I1205 14:11:55.729102 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:11:56 crc kubenswrapper[4858]: E1205 14:11:56.032476 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 05 14:11:56 crc kubenswrapper[4858]: E1205 14:11:56.032692 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmr26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-nz2tl_openstack-operators(263f58fb-a58e-4842-9117-323cef60aae8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:56 crc kubenswrapper[4858]: E1205 14:11:56.723709 4858 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 05 14:11:56 crc kubenswrapper[4858]: E1205 14:11:56.724147 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5799,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-4lcwv_openstack-operators(66f3a723-6f38-4b27-9363-bbe77135d954): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:11:56 crc kubenswrapper[4858]: E1205 14:11:56.739378 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 05 14:11:56 crc kubenswrapper[4858]: E1205 14:11:56.739533 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9kz7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-99hbh_openstack-operators(e4cdac6d-f595-4307-939d-688045771951): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\": context canceled" logger="UnhandledError" Dec 05 14:11:56 crc kubenswrapper[4858]: E1205 14:11:56.740742 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd: Get \\\"https://quay.io/v2/openstack-k8s-operators/rabbitmq-cluster-operator/blobs/sha256:9f4bff248214d12c7254dc3c25ef82bd14ff143e2a06d159f2a8cc1c9e6ef1fd\\\": context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" podUID="e4cdac6d-f595-4307-939d-688045771951" Dec 05 14:11:57 crc kubenswrapper[4858]: I1205 14:11:57.193331 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xpwc4"] Dec 05 14:12:05 crc kubenswrapper[4858]: I1205 14:12:05.860034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpwc4" event={"ID":"fc614f41-81ce-4c6e-b574-f5e562cf95ff","Type":"ContainerStarted","Data":"1d7edd4cacdddd7dbe4a03493c27570ca8749d79e23b4c451f0c23282c19dc1e"} Dec 05 14:12:06 crc kubenswrapper[4858]: I1205 14:12:06.828270 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8"] Dec 05 14:12:08 crc kubenswrapper[4858]: I1205 14:12:08.881999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" event={"ID":"19f67bc9-5b77-4904-9aaf-8dbd7877d30d","Type":"ContainerStarted","Data":"a5c66784724ddb61d632d198408a6221d1d0de0c27b3e8ca6e9bad8770c52811"} Dec 05 14:12:10 crc kubenswrapper[4858]: I1205 14:12:10.624047 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2"] Dec 05 14:12:10 crc kubenswrapper[4858]: I1205 14:12:10.707310 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5"] Dec 05 14:12:10 crc kubenswrapper[4858]: I1205 14:12:10.898740 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxpj"] Dec 05 14:12:10 crc kubenswrapper[4858]: E1205 14:12:10.901929 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" podUID="e4cdac6d-f595-4307-939d-688045771951" Dec 05 14:12:14 crc kubenswrapper[4858]: E1205 14:12:14.505804 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 14:12:14 crc kubenswrapper[4858]: E1205 14:12:14.506312 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhfh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-bp9v9_openstack-operators(c4dec80f-540d-4397-bab7-53f3e1739f7b): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 05 14:12:14 crc 
kubenswrapper[4858]: E1205 14:12:14.507460 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" podUID="c4dec80f-540d-4397-bab7-53f3e1739f7b" Dec 05 14:12:14 crc kubenswrapper[4858]: W1205 14:12:14.518967 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c9d3c6a_fda7_468e_9099_5f09c2dbdbed.slice/crio-168aae690527d3a2803b3a522ab8292f9679723e9afca17b72c93e2315d9055c WatchSource:0}: Error finding container 168aae690527d3a2803b3a522ab8292f9679723e9afca17b72c93e2315d9055c: Status 404 returned error can't find the container with id 168aae690527d3a2803b3a522ab8292f9679723e9afca17b72c93e2315d9055c Dec 05 14:12:14 crc kubenswrapper[4858]: W1205 14:12:14.523196 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ec98fa5_19e3_4584_b2a4_8bd0c6741a01.slice/crio-665a807f634cf7d32b85156276a870444103e96ba4a12a73907bbaae24751cd1 WatchSource:0}: Error finding container 665a807f634cf7d32b85156276a870444103e96ba4a12a73907bbaae24751cd1: Status 404 returned error can't find the container with id 665a807f634cf7d32b85156276a870444103e96ba4a12a73907bbaae24751cd1 Dec 05 14:12:14 crc kubenswrapper[4858]: W1205 14:12:14.533033 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad4a9f4e_080d_43f5_8e3e_6bb24ac1a456.slice/crio-e9071bcfb50e32a538a261d859104c4e9ed5faeb57be43957cdfd91aeb679e69 WatchSource:0}: Error finding container e9071bcfb50e32a538a261d859104c4e9ed5faeb57be43957cdfd91aeb679e69: Status 404 returned error can't find the container with id e9071bcfb50e32a538a261d859104c4e9ed5faeb57be43957cdfd91aeb679e69 Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.709698 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lzjtz"] Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.711181 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.719234 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzjtz"] Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.729355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-catalog-content\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.729400 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-utilities\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.729438 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6sg\" (UniqueName: \"kubernetes.io/projected/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-kube-api-access-nn6sg\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.831016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-catalog-content\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.831180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-utilities\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.831291 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6sg\" (UniqueName: \"kubernetes.io/projected/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-kube-api-access-nn6sg\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.831709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-catalog-content\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.831780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-utilities\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.856765 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nn6sg\" (UniqueName: \"kubernetes.io/projected/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-kube-api-access-nn6sg\") pod \"certified-operators-lzjtz\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.922833 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" event={"ID":"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456","Type":"ContainerStarted","Data":"e9071bcfb50e32a538a261d859104c4e9ed5faeb57be43957cdfd91aeb679e69"} Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.924533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpwc4" event={"ID":"fc614f41-81ce-4c6e-b574-f5e562cf95ff","Type":"ContainerStarted","Data":"c071aff45c6fc95f91b541a2c4513fc31aae7b83d3e6e5961b9ae0e51b109a5a"} Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.925523 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" event={"ID":"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed","Type":"ContainerStarted","Data":"168aae690527d3a2803b3a522ab8292f9679723e9afca17b72c93e2315d9055c"} Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.926513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxpj" event={"ID":"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01","Type":"ContainerStarted","Data":"665a807f634cf7d32b85156276a870444103e96ba4a12a73907bbaae24751cd1"} Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.927987 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" event={"ID":"f33ab949-382d-454e-9c4a-6e636a1f4bdc","Type":"ContainerStarted","Data":"42dd73495dda6e73a8debc04b520488c2815e065026a5fac1239038cbf8d4f26"} Dec 05 14:12:14 crc kubenswrapper[4858]: I1205 14:12:14.930091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" event={"ID":"7f9fa0fa-c2f8-4624-849e-088b48b9e71d","Type":"ContainerStarted","Data":"049397a1fb7676b7433ab637a669d0c228fddb539ca582dba5ab0c0ebb17a0cb"} Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.032960 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.939184 4858 generic.go:334] "Generic (PLEG): container finished" podID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerID="c071aff45c6fc95f91b541a2c4513fc31aae7b83d3e6e5961b9ae0e51b109a5a" exitCode=0 Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.939227 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpwc4" event={"ID":"fc614f41-81ce-4c6e-b574-f5e562cf95ff","Type":"ContainerDied","Data":"c071aff45c6fc95f91b541a2c4513fc31aae7b83d3e6e5961b9ae0e51b109a5a"} Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.941002 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerID="bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde" exitCode=0 Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.941055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxpj" event={"ID":"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01","Type":"ContainerDied","Data":"bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde"} Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.943057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" event={"ID":"5401bf83-09b5-464f-b52c-210a3fa92aa1","Type":"ContainerStarted","Data":"1b2a56991f1e2dc8663ea2d92299288b2e1bff4f2716fe86dd76fd0ccd8c996c"} Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.944683 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" event={"ID":"992029c2-7acc-4f87-b054-4a062babc670","Type":"ContainerStarted","Data":"522e88cde53441f13d7b452d626816da3a17cd6d2578daf84fdec608148efc5c"} Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.946112 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" event={"ID":"ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456","Type":"ContainerStarted","Data":"1d596fe54b82d55e47cc8c1457b7a48b0cd48e8b7c091ae1aa470e0562964f16"} Dec 05 14:12:15 crc kubenswrapper[4858]: I1205 14:12:15.946551 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:12:16 crc kubenswrapper[4858]: E1205 14:12:16.036588 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" podUID="34b5ac68-a347-4e14-b678-371378c55b7a" Dec 05 14:12:16 crc kubenswrapper[4858]: I1205 14:12:16.100193 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lzjtz"] Dec 05 14:12:16 crc kubenswrapper[4858]: I1205 14:12:16.154894 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" podStartSLOduration=58.15487732 podStartE2EDuration="58.15487732s" podCreationTimestamp="2025-12-05 14:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:12:16.087600849 +0000 UTC 
m=+944.635198988" watchObservedRunningTime="2025-12-05 14:12:16.15487732 +0000 UTC m=+944.702475459" Dec 05 14:12:16 crc kubenswrapper[4858]: E1205 14:12:16.496449 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" podUID="1b6160ac-d6c8-448d-b849-4b0455cec2c1" Dec 05 14:12:16 crc kubenswrapper[4858]: I1205 14:12:16.952297 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" event={"ID":"f33ab949-382d-454e-9c4a-6e636a1f4bdc","Type":"ContainerStarted","Data":"3a94814c213149b6deffa8789533db109e5e1cb6e6cd97fdebd06aa545de23d4"} Dec 05 14:12:16 crc kubenswrapper[4858]: I1205 14:12:16.953405 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" event={"ID":"1b6160ac-d6c8-448d-b849-4b0455cec2c1","Type":"ContainerStarted","Data":"7cfb9cafacb749ef41931bdca9699395a380c009b289f1b3d6da16392240f121"} Dec 05 14:12:16 crc kubenswrapper[4858]: I1205 14:12:16.956077 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" event={"ID":"5401bf83-09b5-464f-b52c-210a3fa92aa1","Type":"ContainerStarted","Data":"e04164687a93e4e80ac7e515bd408e881e863d2d4f3163a456e626962afa4a93"} Dec 05 14:12:16 crc kubenswrapper[4858]: I1205 14:12:16.957191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" event={"ID":"34b5ac68-a347-4e14-b678-371378c55b7a","Type":"ContainerStarted","Data":"07fb3ebc746896754f352da4083077a4eddb726df554f2b3c9c54d144d68d687"} Dec 05 14:12:16 crc kubenswrapper[4858]: I1205 14:12:16.959046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzjtz" event={"ID":"f824b4e5-de86-49b6-a7cc-fa6d34e8498a","Type":"ContainerStarted","Data":"fe400637872d6082a9e9d906728afe133ac2af13e810374e7ca2f835610d716f"} Dec 05 14:12:16 crc kubenswrapper[4858]: E1205 14:12:16.999437 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" podUID="c71e1565-e737-42ce-b309-29b487e26853" Dec 05 14:12:17 crc kubenswrapper[4858]: E1205 14:12:17.007730 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" podUID="66f3a723-6f38-4b27-9363-bbe77135d954" Dec 05 14:12:17 crc kubenswrapper[4858]: E1205 14:12:17.866790 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" podUID="e033dea2-183c-4853-b77e-e77857882a4d" Dec 05 14:12:17 crc kubenswrapper[4858]: E1205 14:12:17.882080 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" podUID="59405248-ef7c-4944-a9a4-724e24cf22af" Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.972640 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" event={"ID":"7f9fa0fa-c2f8-4624-849e-088b48b9e71d","Type":"ContainerStarted","Data":"4a781739a77d96d79644467b8efa22e1ab03d4b3ddb15b612267ecaaaa73c002"} Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.973695 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.977771 4858 generic.go:334] "Generic (PLEG): container finished" podID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerID="4add2b32b4d739aa114d7ae93e2687daffac25a107e9c0d904f0f4385ae612af" exitCode=0 Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.978171 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzjtz" event={"ID":"f824b4e5-de86-49b6-a7cc-fa6d34e8498a","Type":"ContainerDied","Data":"4add2b32b4d739aa114d7ae93e2687daffac25a107e9c0d904f0f4385ae612af"} Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.981474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" event={"ID":"c71e1565-e737-42ce-b309-29b487e26853","Type":"ContainerStarted","Data":"3df2008374f5db43c167f09738ffddd56d7bd253ff94f3dd350a7853320dcbe2"} Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.988765 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" event={"ID":"c4dec80f-540d-4397-bab7-53f3e1739f7b","Type":"ContainerStarted","Data":"2a4a3bb15400038ae83ac6a0c98a85df4e8e2f25c2d16e049153c19bce527085"} Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.988801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" event={"ID":"c4dec80f-540d-4397-bab7-53f3e1739f7b","Type":"ContainerStarted","Data":"2298e115431fd33c7b76eb3d3e718b01c1f9310182d078bb816f5488bf1b9a1a"} Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.989429 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.995007 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" event={"ID":"66f3a723-6f38-4b27-9363-bbe77135d954","Type":"ContainerStarted","Data":"a56041b61ceae20c3113c536c1337d2236e18e3efcb84dd33547011b7d626345"} Dec 05 14:12:17 crc kubenswrapper[4858]: I1205 14:12:17.996043 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" event={"ID":"59405248-ef7c-4944-a9a4-724e24cf22af","Type":"ContainerStarted","Data":"a7e9a8519e72e050f97ebc8a76279e99763abbe76409df241f97a60bca60ebbe"} Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.005575 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" podStartSLOduration=10.662362112 
podStartE2EDuration="1m1.005554548s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.172723235 +0000 UTC m=+888.720321374" lastFinishedPulling="2025-12-05 14:12:10.515915671 +0000 UTC m=+939.063513810" observedRunningTime="2025-12-05 14:12:17.993453826 +0000 UTC m=+946.541051965" watchObservedRunningTime="2025-12-05 14:12:18.005554548 +0000 UTC m=+946.553152687" Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.006224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" event={"ID":"e033dea2-183c-4853-b77e-e77857882a4d","Type":"ContainerStarted","Data":"32b1c902f1852a0932c73b047f806ea82aa4a6877805972ecfdc0c03e66fba90"} Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.024968 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" event={"ID":"992029c2-7acc-4f87-b054-4a062babc670","Type":"ContainerStarted","Data":"e055a19928cf0d315581ab58ac02248ebce00513f10893739707ed9ac52650be"} Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.036967 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.037111 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.037235 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.045319 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" podStartSLOduration=4.921679668 podStartE2EDuration="1m1.042681869s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.914177771 +0000 UTC m=+888.461775910" lastFinishedPulling="2025-12-05 14:12:16.035179962 +0000 UTC m=+944.582778111" observedRunningTime="2025-12-05 14:12:18.032571294 +0000 UTC m=+946.580169433" watchObservedRunningTime="2025-12-05 14:12:18.042681869 +0000 UTC m=+946.590280008" Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.154577 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" podStartSLOduration=11.064161773 podStartE2EDuration="1m1.154557824s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.089993156 +0000 UTC m=+888.637591295" lastFinishedPulling="2025-12-05 14:12:10.180389197 +0000 UTC m=+938.727987346" observedRunningTime="2025-12-05 14:12:18.150308626 +0000 UTC m=+946.697906765" watchObservedRunningTime="2025-12-05 14:12:18.154557824 +0000 UTC m=+946.702155973" Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.173582 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" podStartSLOduration=10.04136648 podStartE2EDuration="1m0.173564226s" podCreationTimestamp="2025-12-05 14:11:18 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.20808089 +0000 UTC m=+888.755679029" lastFinishedPulling="2025-12-05 14:12:10.340278636 +0000 UTC m=+938.887876775" 
observedRunningTime="2025-12-05 14:12:18.171941637 +0000 UTC m=+946.719539776" watchObservedRunningTime="2025-12-05 14:12:18.173564226 +0000 UTC m=+946.721162375" Dec 05 14:12:18 crc kubenswrapper[4858]: I1205 14:12:18.213615 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" podStartSLOduration=6.792888047 podStartE2EDuration="1m1.213593864s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.0858496 +0000 UTC m=+888.633447739" lastFinishedPulling="2025-12-05 14:12:14.506555397 +0000 UTC m=+943.054153556" observedRunningTime="2025-12-05 14:12:18.20565579 +0000 UTC m=+946.753253949" watchObservedRunningTime="2025-12-05 14:12:18.213593864 +0000 UTC m=+946.761192003" Dec 05 14:12:18 crc kubenswrapper[4858]: E1205 14:12:18.220061 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" podUID="9f3dcc24-a808-434b-a487-c9a82145bc98" Dec 05 14:12:18 crc kubenswrapper[4858]: E1205 14:12:18.265463 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" podUID="29cf74b8-eb6d-4655-876e-10e917166426" Dec 05 14:12:18 crc kubenswrapper[4858]: E1205 14:12:18.266036 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" podUID="263f58fb-a58e-4842-9117-323cef60aae8" Dec 05 14:12:18 crc kubenswrapper[4858]: E1205 14:12:18.624951 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 05 14:12:18 crc kubenswrapper[4858]: E1205 14:12:18.625358 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mct2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-9wwms_openstack-operators(a602bef3-00cb-471f-898e-7abcf5d90add): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:12:18 crc kubenswrapper[4858]: E1205 14:12:18.626901 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" podUID="a602bef3-00cb-471f-898e-7abcf5d90add" Dec 05 14:12:18 crc kubenswrapper[4858]: E1205 14:12:18.665252 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" podUID="82620a48-19bb-475e-81a4-3721c91bfa64" Dec 05 14:12:18 crc kubenswrapper[4858]: E1205 14:12:18.991149 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" podUID="f46597a6-55e2-49fa-8ee8-6fe7db5be4cb" Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.070022 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" event={"ID":"263f58fb-a58e-4842-9117-323cef60aae8","Type":"ContainerStarted","Data":"d2a46e07dc84318b62fa44350a0020985d4817c39dbfca40d9ff8645ac422be7"} Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.090699 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.109993 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" event={"ID":"82620a48-19bb-475e-81a4-3721c91bfa64","Type":"ContainerStarted","Data":"88ae7805ca2664408ed333da2375cbe6eee706def6d2e0ec5b329ca370a286e1"} Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.117328 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" 
event={"ID":"9f3dcc24-a808-434b-a487-c9a82145bc98","Type":"ContainerStarted","Data":"223232f005c1dc9b00e6582187ad5f4cd7afada6a6b68900a8b8923533decd64"} Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.155333 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" podStartSLOduration=3.5404440360000002 podStartE2EDuration="1m2.155319264s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.024153177 +0000 UTC m=+888.571751316" lastFinishedPulling="2025-12-05 14:12:18.639028405 +0000 UTC m=+947.186626544" observedRunningTime="2025-12-05 14:12:19.15432627 +0000 UTC m=+947.701924409" watchObservedRunningTime="2025-12-05 14:12:19.155319264 +0000 UTC m=+947.702917403" Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.162114 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerID="dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c" exitCode=0 Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.162184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxpj" event={"ID":"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01","Type":"ContainerDied","Data":"dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c"} Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.178556 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" event={"ID":"f46597a6-55e2-49fa-8ee8-6fe7db5be4cb","Type":"ContainerStarted","Data":"9087d7da7c85bff3eeb0ae5c0ba060880aa8d88b75b83e71fbe5126dc5b97bf6"} Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.186921 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" event={"ID":"29cf74b8-eb6d-4655-876e-10e917166426","Type":"ContainerStarted","Data":"df1dc1cf8e4dcad72e511bf5a51ad09a9ea34207cf5baa1944745b61144fdcd3"} Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.191330 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" Dec 05 14:12:19 crc kubenswrapper[4858]: I1205 14:12:19.191909 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" Dec 05 14:12:19 crc kubenswrapper[4858]: E1205 14:12:19.410630 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" podUID="f482f790-9250-42a9-b5a5-e0509b1b0e10" Dec 05 14:12:19 crc kubenswrapper[4858]: E1205 14:12:19.655363 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" podUID="aa187928-b3b8-40e6-b60b-19d84781e34c" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.199317 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" 
event={"ID":"1b6160ac-d6c8-448d-b849-4b0455cec2c1","Type":"ContainerStarted","Data":"cb71a5090abe388f4f1f494acd407a64f6cf43a96a9054655faa166ef7d43fd3"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.200401 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.203999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" event={"ID":"f482f790-9250-42a9-b5a5-e0509b1b0e10","Type":"ContainerStarted","Data":"993e1a4626d700fe79f96820ae08cc9e06d6df9806d5ea21013f10db125cb6a1"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.210325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" event={"ID":"e033dea2-183c-4853-b77e-e77857882a4d","Type":"ContainerStarted","Data":"b59bb3e6abf8b72045f5fa649f37c1a0c37adaf8f3d7ace787d0e6c0fa8a9bd9"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.210476 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.227769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" event={"ID":"34b5ac68-a347-4e14-b678-371378c55b7a","Type":"ContainerStarted","Data":"c59b9f35b6f13915ca8fff151a8725a6a7bdcda3835425630c2ba25583471d77"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.228443 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.231515 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzjtz" event={"ID":"f824b4e5-de86-49b6-a7cc-fa6d34e8498a","Type":"ContainerStarted","Data":"899e919be07ef8bc095d2ddb3951e201bce5d4ede039e8a5ab0c9c4f2a9d5432"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.251271 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpwc4" event={"ID":"fc614f41-81ce-4c6e-b574-f5e562cf95ff","Type":"ContainerStarted","Data":"b6694b45b131a7bd0549a432096392b930e6c5d5043919df41d857f6ae00abb9"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.267312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" event={"ID":"c71e1565-e737-42ce-b309-29b487e26853","Type":"ContainerStarted","Data":"6486f14fdaa88d477d630e53a67a5bd03a20d1859fe4f86fdd6d8d1365c84727"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.268415 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.284411 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" event={"ID":"66f3a723-6f38-4b27-9363-bbe77135d954","Type":"ContainerStarted","Data":"195d5541cca9602188bac24ace8dcb723adfcd08292eb0c2d3136dc374507ac7"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.285957 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.316564 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" event={"ID":"aa187928-b3b8-40e6-b60b-19d84781e34c","Type":"ContainerStarted","Data":"a15fabe2972c56cfb39019afbb016587e7f4f2904ce4f24c57d552f27f89e9b2"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.327074 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" event={"ID":"59405248-ef7c-4944-a9a4-724e24cf22af","Type":"ContainerStarted","Data":"4306249f417d2392fa4d8a5d937b55ba3353c04aab17cedd7d1ddb833147cc61"} Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.441228 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" podStartSLOduration=4.006062233 podStartE2EDuration="1m3.441209247s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.404548986 +0000 UTC m=+887.952147125" lastFinishedPulling="2025-12-05 14:12:18.839696 +0000 UTC m=+947.387294139" observedRunningTime="2025-12-05 14:12:20.418800227 +0000 UTC m=+948.966398396" watchObservedRunningTime="2025-12-05 14:12:20.441209247 +0000 UTC m=+948.988807386" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.642798 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.718403 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" podStartSLOduration=4.748052051 podStartE2EDuration="1m3.718384708s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.830178844 +0000 UTC m=+888.377776983" lastFinishedPulling="2025-12-05 14:12:18.800511501 +0000 UTC m=+947.348109640" observedRunningTime="2025-12-05 14:12:20.714840316 +0000 UTC m=+949.262438455" watchObservedRunningTime="2025-12-05 14:12:20.718384708 +0000 UTC m=+949.265982847" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.927885 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" podStartSLOduration=5.175468551 podStartE2EDuration="1m3.927865039s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.023915822 +0000 UTC m=+888.571513961" lastFinishedPulling="2025-12-05 14:12:18.77631231 +0000 UTC m=+947.323910449" observedRunningTime="2025-12-05 14:12:20.923989369 +0000 UTC m=+949.471587508" watchObservedRunningTime="2025-12-05 14:12:20.927865039 +0000 UTC m=+949.475463178" Dec 05 14:12:20 crc kubenswrapper[4858]: I1205 14:12:20.961938 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" podStartSLOduration=4.374585409 podStartE2EDuration="1m3.961919328s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.275083519 +0000 UTC m=+887.822681668" lastFinishedPulling="2025-12-05 14:12:18.862417448 +0000 UTC m=+947.410015587" observedRunningTime="2025-12-05 14:12:20.960670259 +0000 UTC m=+949.508268408" 
watchObservedRunningTime="2025-12-05 14:12:20.961919328 +0000 UTC m=+949.509517467" Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.035581 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" podStartSLOduration=4.981690593 podStartE2EDuration="1m4.035566218s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.747070707 +0000 UTC m=+888.294668846" lastFinishedPulling="2025-12-05 14:12:18.800946332 +0000 UTC m=+947.348544471" observedRunningTime="2025-12-05 14:12:21.033133811 +0000 UTC m=+949.580731950" watchObservedRunningTime="2025-12-05 14:12:21.035566218 +0000 UTC m=+949.583164357" Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.348393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" event={"ID":"82620a48-19bb-475e-81a4-3721c91bfa64","Type":"ContainerStarted","Data":"1594da6ca98eb0067d03b3ec51e44a708e3007a0b9b661fc0ae1dfbba177f99b"} Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.349270 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.355212 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" event={"ID":"9f3dcc24-a808-434b-a487-c9a82145bc98","Type":"ContainerStarted","Data":"71a50ba1da2c617909e0db0d09f9f8aa3c4d66f0519d1f60f55f22582987e9d6"} Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.355861 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.371979 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" podStartSLOduration=3.6532063040000002 podStartE2EDuration="1m4.371962612s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.275265434 +0000 UTC m=+887.822863573" lastFinishedPulling="2025-12-05 14:12:19.994021742 +0000 UTC m=+948.541619881" observedRunningTime="2025-12-05 14:12:21.369416703 +0000 UTC m=+949.917014842" watchObservedRunningTime="2025-12-05 14:12:21.371962612 +0000 UTC m=+949.919560751" Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.381628 4858 generic.go:334] "Generic (PLEG): container finished" podID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerID="899e919be07ef8bc095d2ddb3951e201bce5d4ede039e8a5ab0c9c4f2a9d5432" exitCode=0 Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.381714 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzjtz" event={"ID":"f824b4e5-de86-49b6-a7cc-fa6d34e8498a","Type":"ContainerDied","Data":"899e919be07ef8bc095d2ddb3951e201bce5d4ede039e8a5ab0c9c4f2a9d5432"} Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.393442 4858 generic.go:334] "Generic (PLEG): container finished" podID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerID="b6694b45b131a7bd0549a432096392b930e6c5d5043919df41d857f6ae00abb9" exitCode=0 Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.393499 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpwc4" 
event={"ID":"fc614f41-81ce-4c6e-b574-f5e562cf95ff","Type":"ContainerDied","Data":"b6694b45b131a7bd0549a432096392b930e6c5d5043919df41d857f6ae00abb9"} Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.398554 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" event={"ID":"263f58fb-a58e-4842-9117-323cef60aae8","Type":"ContainerStarted","Data":"d5c11ed248f1e686acb775933b1f6c094d584cd0388f20196672cf1d3a128e5a"} Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.398582 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.410422 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" podStartSLOduration=3.64365009 podStartE2EDuration="1m4.410404774s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.023740918 +0000 UTC m=+888.571339057" lastFinishedPulling="2025-12-05 14:12:20.790495602 +0000 UTC m=+949.338093741" observedRunningTime="2025-12-05 14:12:21.402146752 +0000 UTC m=+949.949744891" watchObservedRunningTime="2025-12-05 14:12:21.410404774 +0000 UTC m=+949.958002913" Dec 05 14:12:21 crc kubenswrapper[4858]: I1205 14:12:21.455080 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" podStartSLOduration=3.278893891 podStartE2EDuration="1m4.45506284s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:18.820078714 +0000 UTC m=+887.367676843" lastFinishedPulling="2025-12-05 14:12:19.996247653 +0000 UTC m=+948.543845792" observedRunningTime="2025-12-05 14:12:21.45160264 +0000 UTC m=+949.999200779" watchObservedRunningTime="2025-12-05 14:12:21.45506284 +0000 UTC m=+950.002660979" Dec 05 14:12:23 crc kubenswrapper[4858]: I1205 14:12:23.420956 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" event={"ID":"29cf74b8-eb6d-4655-876e-10e917166426","Type":"ContainerStarted","Data":"ca6a729f0b6680785c90b7d3f94a35ee5b7092b28af499eeae229d2c71a38920"} Dec 05 14:12:23 crc kubenswrapper[4858]: I1205 14:12:23.421260 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" Dec 05 14:12:23 crc kubenswrapper[4858]: I1205 14:12:23.427791 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" event={"ID":"f482f790-9250-42a9-b5a5-e0509b1b0e10","Type":"ContainerStarted","Data":"3e9538dcd41a5e6cf455e164e2ab24360e0ecf58e345e603c23cd4d714a8ba72"} Dec 05 14:12:23 crc kubenswrapper[4858]: I1205 14:12:23.447728 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" podStartSLOduration=5.137483502 podStartE2EDuration="1m6.447705541s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.928790598 +0000 UTC m=+888.476388737" lastFinishedPulling="2025-12-05 14:12:21.239012637 +0000 UTC m=+949.786610776" observedRunningTime="2025-12-05 14:12:23.44159374 +0000 UTC m=+951.989191889" watchObservedRunningTime="2025-12-05 14:12:23.447705541 +0000 UTC 
m=+951.995303680" Dec 05 14:12:23 crc kubenswrapper[4858]: I1205 14:12:23.469088 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" podStartSLOduration=4.615675509 podStartE2EDuration="1m6.469067038s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.392105169 +0000 UTC m=+887.939703308" lastFinishedPulling="2025-12-05 14:12:21.245496698 +0000 UTC m=+949.793094837" observedRunningTime="2025-12-05 14:12:23.464678546 +0000 UTC m=+952.012276725" watchObservedRunningTime="2025-12-05 14:12:23.469067038 +0000 UTC m=+952.016665177" Dec 05 14:12:24 crc kubenswrapper[4858]: I1205 14:12:24.434756 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" Dec 05 14:12:25 crc kubenswrapper[4858]: I1205 14:12:25.446160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxpj" event={"ID":"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01","Type":"ContainerStarted","Data":"3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465"} Dec 05 14:12:25 crc kubenswrapper[4858]: I1205 14:12:25.484661 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lsxpj" podStartSLOduration=41.654783131 podStartE2EDuration="48.484637131s" podCreationTimestamp="2025-12-05 14:11:37 +0000 UTC" firstStartedPulling="2025-12-05 14:12:15.989753629 +0000 UTC m=+944.537351768" lastFinishedPulling="2025-12-05 14:12:22.819607619 +0000 UTC m=+951.367205768" observedRunningTime="2025-12-05 14:12:25.46908665 +0000 UTC m=+954.016684799" watchObservedRunningTime="2025-12-05 14:12:25.484637131 +0000 UTC m=+954.032235280" Dec 05 14:12:27 crc kubenswrapper[4858]: I1205 14:12:27.574733 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-nz2tl" Dec 05 14:12:27 crc kubenswrapper[4858]: I1205 14:12:27.681612 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nkckp" Dec 05 14:12:27 crc kubenswrapper[4858]: I1205 14:12:27.710814 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:12:27 crc kubenswrapper[4858]: I1205 14:12:27.710887 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:12:27 crc kubenswrapper[4858]: I1205 14:12:27.802106 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" Dec 05 14:12:27 crc kubenswrapper[4858]: I1205 14:12:27.859659 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:12:27 crc kubenswrapper[4858]: I1205 14:12:27.866704 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" Dec 05 14:12:27 crc kubenswrapper[4858]: I1205 14:12:27.944357 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.028206 
4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.031257 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-bp9v9" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.338279 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.354349 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.452124 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.492020 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" event={"ID":"aa187928-b3b8-40e6-b60b-19d84781e34c","Type":"ContainerStarted","Data":"c7bc9b7139ef060c986c2908e6ae0c2015c6f7157e32ebd8eff5b4643168eec8"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.492162 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.500868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" event={"ID":"a602bef3-00cb-471f-898e-7abcf5d90add","Type":"ContainerStarted","Data":"8a9ca22799927094980f395754d5872145dfc9f5347813d87b5aeb9e77a9b130"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.500928 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" event={"ID":"a602bef3-00cb-471f-898e-7abcf5d90add","Type":"ContainerStarted","Data":"588b6e0e45effaa6aae2fd806bcb350a338b73ef404aeb0883ea0071673c6cd6"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.501713 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.525965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzjtz" event={"ID":"f824b4e5-de86-49b6-a7cc-fa6d34e8498a","Type":"ContainerStarted","Data":"8e7cadbbcdc6972d9e059d8fd6ec68a65dfa544b89a0f6f98b5d2f94f66febc7"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.532665 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpwc4" event={"ID":"fc614f41-81ce-4c6e-b574-f5e562cf95ff","Type":"ContainerStarted","Data":"415c921ef45e74e757874a1404657cdf73168526fa276a340350d451e1c99a17"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.536012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" event={"ID":"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed","Type":"ContainerStarted","Data":"0dca32829a3005636452412865415ed464bcc3cabd4ce85eb4c53dd6c22b0c45"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.536041 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" event={"ID":"4c9d3c6a-fda7-468e-9099-5f09c2dbdbed","Type":"ContainerStarted","Data":"efe7b46b9bc0527f1a5011e5ce2fa3bee86fa1a24407b7360a5cb82b5e17aa27"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.536673 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.540597 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" event={"ID":"f46597a6-55e2-49fa-8ee8-6fe7db5be4cb","Type":"ContainerStarted","Data":"54003cc10a37e8407dd1886d575ae52faec92d40f289730b33d809df94c5a868"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.541237 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.563101 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.564943 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" event={"ID":"19f67bc9-5b77-4904-9aaf-8dbd7877d30d","Type":"ContainerStarted","Data":"5cf2c4277347d4403ecb59bb38bc03813cfc51f714ae985d99a724e9e233641f"} Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.603609 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-w4zrw" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.607308 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" podStartSLOduration=4.125789996 podStartE2EDuration="1m11.60729008s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.024245509 +0000 UTC m=+888.571843648" lastFinishedPulling="2025-12-05 14:12:27.505745593 +0000 UTC m=+956.053343732" observedRunningTime="2025-12-05 14:12:28.603111693 +0000 UTC m=+957.150709852" watchObservedRunningTime="2025-12-05 14:12:28.60729008 +0000 UTC m=+957.154888219" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.613728 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.672804 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" podStartSLOduration=4.192599651 podStartE2EDuration="1m11.672784779s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.000501433 +0000 UTC m=+888.548099582" lastFinishedPulling="2025-12-05 14:12:27.480686581 +0000 UTC m=+956.028284710" observedRunningTime="2025-12-05 14:12:28.661289973 +0000 UTC m=+957.208888112" watchObservedRunningTime="2025-12-05 14:12:28.672784779 +0000 UTC m=+957.220382918" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.703692 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lzjtz" 
podStartSLOduration=5.141113784 podStartE2EDuration="14.703672396s" podCreationTimestamp="2025-12-05 14:12:14 +0000 UTC" firstStartedPulling="2025-12-05 14:12:17.979078703 +0000 UTC m=+946.526676842" lastFinishedPulling="2025-12-05 14:12:27.541637315 +0000 UTC m=+956.089235454" observedRunningTime="2025-12-05 14:12:28.700294828 +0000 UTC m=+957.247892967" watchObservedRunningTime="2025-12-05 14:12:28.703672396 +0000 UTC m=+957.251270535" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.756187 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" podStartSLOduration=8.16663846 podStartE2EDuration="1m11.756170824s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:11:19.269581232 +0000 UTC m=+887.817179371" lastFinishedPulling="2025-12-05 14:12:22.859113596 +0000 UTC m=+951.406711735" observedRunningTime="2025-12-05 14:12:28.746872048 +0000 UTC m=+957.294470207" watchObservedRunningTime="2025-12-05 14:12:28.756170824 +0000 UTC m=+957.303768963" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.918565 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" podStartSLOduration=58.940982777 podStartE2EDuration="1m11.918530601s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:12:14.521289149 +0000 UTC m=+943.068887288" lastFinishedPulling="2025-12-05 14:12:27.498836973 +0000 UTC m=+956.046435112" observedRunningTime="2025-12-05 14:12:28.916170446 +0000 UTC m=+957.463768595" watchObservedRunningTime="2025-12-05 14:12:28.918530601 +0000 UTC m=+957.466128740" Dec 05 14:12:28 crc kubenswrapper[4858]: I1205 14:12:28.918980 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xpwc4" podStartSLOduration=22.389624927 podStartE2EDuration="33.918975781s" podCreationTimestamp="2025-12-05 14:11:55 +0000 UTC" firstStartedPulling="2025-12-05 14:12:15.989750968 +0000 UTC m=+944.537349107" lastFinishedPulling="2025-12-05 14:12:27.519101822 +0000 UTC m=+956.066699961" observedRunningTime="2025-12-05 14:12:28.884109442 +0000 UTC m=+957.431707581" watchObservedRunningTime="2025-12-05 14:12:28.918975781 +0000 UTC m=+957.466573920" Dec 05 14:12:29 crc kubenswrapper[4858]: I1205 14:12:29.082468 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" Dec 05 14:12:29 crc kubenswrapper[4858]: I1205 14:12:29.574659 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" event={"ID":"19f67bc9-5b77-4904-9aaf-8dbd7877d30d","Type":"ContainerStarted","Data":"ea96692dd1fdcadb25d4c38ada6d7a2c3def1cdc6b212ad196acebd7c93e0fb4"} Dec 05 14:12:29 crc kubenswrapper[4858]: I1205 14:12:29.605605 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" podStartSLOduration=52.999574033 podStartE2EDuration="1m12.605588262s" podCreationTimestamp="2025-12-05 14:11:17 +0000 UTC" firstStartedPulling="2025-12-05 14:12:07.875168273 +0000 UTC m=+936.422766412" lastFinishedPulling="2025-12-05 14:12:27.481182502 +0000 UTC m=+956.028780641" observedRunningTime="2025-12-05 14:12:29.597209167 +0000 UTC m=+958.144807306" 
watchObservedRunningTime="2025-12-05 14:12:29.605588262 +0000 UTC m=+958.153186401" Dec 05 14:12:30 crc kubenswrapper[4858]: I1205 14:12:30.583555 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:12:33 crc kubenswrapper[4858]: I1205 14:12:33.666883 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" Dec 05 14:12:34 crc kubenswrapper[4858]: I1205 14:12:34.116892 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" Dec 05 14:12:35 crc kubenswrapper[4858]: I1205 14:12:35.033659 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:35 crc kubenswrapper[4858]: I1205 14:12:35.033860 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:35 crc kubenswrapper[4858]: I1205 14:12:35.729318 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:12:35 crc kubenswrapper[4858]: I1205 14:12:35.729387 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:12:36 crc kubenswrapper[4858]: I1205 14:12:36.091178 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lzjtz" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="registry-server" probeResult="failure" output=< Dec 05 14:12:36 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:12:36 crc kubenswrapper[4858]: > Dec 05 14:12:36 crc kubenswrapper[4858]: I1205 14:12:36.772439 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xpwc4" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="registry-server" probeResult="failure" output=< Dec 05 14:12:36 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:12:36 crc kubenswrapper[4858]: > Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.458556 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-75gjs"] Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.460481 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.474657 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75gjs"] Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.604979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9dw5\" (UniqueName: \"kubernetes.io/projected/7b8fe39c-64da-43dc-ae6c-7d17883a811f-kube-api-access-k9dw5\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.605056 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-utilities\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.605086 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-catalog-content\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.630039 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" event={"ID":"e4cdac6d-f595-4307-939d-688045771951","Type":"ContainerStarted","Data":"8ef9e9681e0e9a24099d1722c7b17b40d7a098229bc965bcf70c371b29859f9e"} Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.648055 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-99hbh" podStartSLOduration=2.9968710549999997 podStartE2EDuration="1m19.648037828s" podCreationTimestamp="2025-12-05 14:11:18 +0000 UTC" firstStartedPulling="2025-12-05 14:11:20.137240906 +0000 UTC m=+888.684839045" lastFinishedPulling="2025-12-05 14:12:36.788407679 +0000 UTC m=+965.336005818" observedRunningTime="2025-12-05 14:12:37.644632305 +0000 UTC m=+966.192230444" watchObservedRunningTime="2025-12-05 14:12:37.648037828 +0000 UTC m=+966.195635967" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.681549 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.706781 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9dw5\" (UniqueName: \"kubernetes.io/projected/7b8fe39c-64da-43dc-ae6c-7d17883a811f-kube-api-access-k9dw5\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.706896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-utilities\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.706941 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-catalog-content\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.707486 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-catalog-content\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.707642 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-utilities\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.750844 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9dw5\" (UniqueName: \"kubernetes.io/projected/7b8fe39c-64da-43dc-ae6c-7d17883a811f-kube-api-access-k9dw5\") pod \"redhat-operators-75gjs\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") " pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.774913 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:12:37 crc kubenswrapper[4858]: I1205 14:12:37.780043 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:38 crc kubenswrapper[4858]: I1205 14:12:38.079224 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75gjs"] Dec 05 14:12:38 crc kubenswrapper[4858]: I1205 14:12:38.239405 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-9wwms" Dec 05 14:12:38 crc kubenswrapper[4858]: I1205 14:12:38.637105 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerID="49e0062994158789dc2eeb66d51590ffb72545650d599de3f073dfb76d33663f" exitCode=0 Dec 05 14:12:38 crc kubenswrapper[4858]: I1205 14:12:38.637400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gjs" event={"ID":"7b8fe39c-64da-43dc-ae6c-7d17883a811f","Type":"ContainerDied","Data":"49e0062994158789dc2eeb66d51590ffb72545650d599de3f073dfb76d33663f"} Dec 05 14:12:38 crc kubenswrapper[4858]: I1205 14:12:38.637424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gjs" event={"ID":"7b8fe39c-64da-43dc-ae6c-7d17883a811f","Type":"ContainerStarted","Data":"ac62137e2e14d7d4e367a4207aac6cb4d4afdd0afda3b8cdcc0ede654f513b70"} Dec 05 14:12:38 crc kubenswrapper[4858]: I1205 14:12:38.735713 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.037527 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxpj"] Dec 05 14:12:40 crc 
kubenswrapper[4858]: I1205 14:12:40.037812 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lsxpj" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerName="registry-server" containerID="cri-o://3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465" gracePeriod=2 Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.446036 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.547247 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jbj9\" (UniqueName: \"kubernetes.io/projected/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-kube-api-access-5jbj9\") pod \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.547299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-utilities\") pod \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.547422 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-catalog-content\") pod \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\" (UID: \"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01\") " Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.548732 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-utilities" (OuterVolumeSpecName: "utilities") pod "6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" (UID: "6ec98fa5-19e3-4584-b2a4-8bd0c6741a01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.553025 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-kube-api-access-5jbj9" (OuterVolumeSpecName: "kube-api-access-5jbj9") pod "6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" (UID: "6ec98fa5-19e3-4584-b2a4-8bd0c6741a01"). InnerVolumeSpecName "kube-api-access-5jbj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.569497 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" (UID: "6ec98fa5-19e3-4584-b2a4-8bd0c6741a01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.649023 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.649052 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jbj9\" (UniqueName: \"kubernetes.io/projected/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-kube-api-access-5jbj9\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.649063 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.652933 4858 generic.go:334] "Generic (PLEG): container finished" podID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerID="3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465" exitCode=0 Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.652996 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lsxpj" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.653029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxpj" event={"ID":"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01","Type":"ContainerDied","Data":"3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465"} Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.653070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lsxpj" event={"ID":"6ec98fa5-19e3-4584-b2a4-8bd0c6741a01","Type":"ContainerDied","Data":"665a807f634cf7d32b85156276a870444103e96ba4a12a73907bbaae24751cd1"} Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.653089 4858 scope.go:117] "RemoveContainer" containerID="3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.659989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gjs" event={"ID":"7b8fe39c-64da-43dc-ae6c-7d17883a811f","Type":"ContainerStarted","Data":"9b9c85222227d1093a669a2871c99f5bc7aaf8a90a1af81c6f1d35df2cb282dd"} Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.685626 4858 scope.go:117] "RemoveContainer" containerID="dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.708950 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxpj"] Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.709764 4858 scope.go:117] "RemoveContainer" containerID="bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.715631 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lsxpj"] Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.737328 4858 scope.go:117] "RemoveContainer" containerID="3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465" Dec 05 14:12:40 crc kubenswrapper[4858]: E1205 14:12:40.738154 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465\": container with ID starting with 3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465 not found: ID does not exist" containerID="3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.738198 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465"} err="failed to get container status \"3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465\": rpc error: code = NotFound desc = could not find container \"3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465\": container with ID starting with 3522690aee3d622f0bdeb577284f3186933f5fc24e164e50c79b598dd1714465 not found: ID does not exist" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.738225 4858 scope.go:117] "RemoveContainer" containerID="dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c" Dec 05 14:12:40 crc kubenswrapper[4858]: E1205 14:12:40.738639 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c\": container with ID starting with dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c not found: ID does not exist" containerID="dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.738674 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c"} err="failed to get container status \"dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c\": rpc error: code = NotFound desc = could not find container \"dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c\": container with ID starting with dcf363e7561794fa5f980d70cd15d83b99ee2be65e696a9ac4b692188657ec8c not found: ID does not exist" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.738694 4858 scope.go:117] "RemoveContainer" containerID="bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde" Dec 05 14:12:40 crc kubenswrapper[4858]: E1205 14:12:40.739188 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde\": container with ID starting with bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde not found: ID does not exist" containerID="bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde" Dec 05 14:12:40 crc kubenswrapper[4858]: I1205 14:12:40.739314 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde"} err="failed to get container status \"bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde\": rpc error: code = NotFound desc = could not find container \"bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde\": container with ID starting with bd929723bf8ddb5024b15dc4d2805fe0e72a6372b46883b65b02d0ba96479cde not found: ID does not exist" Dec 05 14:12:41 crc kubenswrapper[4858]: I1205 14:12:41.685014 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" 
containerID="9b9c85222227d1093a669a2871c99f5bc7aaf8a90a1af81c6f1d35df2cb282dd" exitCode=0 Dec 05 14:12:41 crc kubenswrapper[4858]: I1205 14:12:41.685104 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gjs" event={"ID":"7b8fe39c-64da-43dc-ae6c-7d17883a811f","Type":"ContainerDied","Data":"9b9c85222227d1093a669a2871c99f5bc7aaf8a90a1af81c6f1d35df2cb282dd"} Dec 05 14:12:41 crc kubenswrapper[4858]: I1205 14:12:41.908353 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" path="/var/lib/kubelet/pods/6ec98fa5-19e3-4584-b2a4-8bd0c6741a01/volumes" Dec 05 14:12:45 crc kubenswrapper[4858]: I1205 14:12:45.096298 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:45 crc kubenswrapper[4858]: I1205 14:12:45.145351 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:45 crc kubenswrapper[4858]: I1205 14:12:45.770180 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:12:45 crc kubenswrapper[4858]: I1205 14:12:45.817061 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:12:46 crc kubenswrapper[4858]: I1205 14:12:46.236189 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lzjtz"] Dec 05 14:12:46 crc kubenswrapper[4858]: I1205 14:12:46.723984 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gjs" event={"ID":"7b8fe39c-64da-43dc-ae6c-7d17883a811f","Type":"ContainerStarted","Data":"2f06b7a621cf90b14de829b43670f31ae332506f354632390cbee32e37a7ec6b"} Dec 05 14:12:46 crc kubenswrapper[4858]: I1205 14:12:46.724165 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lzjtz" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="registry-server" containerID="cri-o://8e7cadbbcdc6972d9e059d8fd6ec68a65dfa544b89a0f6f98b5d2f94f66febc7" gracePeriod=2 Dec 05 14:12:46 crc kubenswrapper[4858]: I1205 14:12:46.750197 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-75gjs" podStartSLOduration=2.781487877 podStartE2EDuration="9.750174127s" podCreationTimestamp="2025-12-05 14:12:37 +0000 UTC" firstStartedPulling="2025-12-05 14:12:38.639246444 +0000 UTC m=+967.186844583" lastFinishedPulling="2025-12-05 14:12:45.607932694 +0000 UTC m=+974.155530833" observedRunningTime="2025-12-05 14:12:46.743796356 +0000 UTC m=+975.291394515" watchObservedRunningTime="2025-12-05 14:12:46.750174127 +0000 UTC m=+975.297772266" Dec 05 14:12:47 crc kubenswrapper[4858]: I1205 14:12:47.781484 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:47 crc kubenswrapper[4858]: I1205 14:12:47.782088 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:48 crc kubenswrapper[4858]: I1205 14:12:48.038412 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xpwc4"] Dec 05 14:12:48 crc kubenswrapper[4858]: I1205 14:12:48.038660 4858 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xpwc4" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="registry-server" containerID="cri-o://415c921ef45e74e757874a1404657cdf73168526fa276a340350d451e1c99a17" gracePeriod=2 Dec 05 14:12:48 crc kubenswrapper[4858]: I1205 14:12:48.738291 4858 generic.go:334] "Generic (PLEG): container finished" podID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerID="8e7cadbbcdc6972d9e059d8fd6ec68a65dfa544b89a0f6f98b5d2f94f66febc7" exitCode=0 Dec 05 14:12:48 crc kubenswrapper[4858]: I1205 14:12:48.738364 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzjtz" event={"ID":"f824b4e5-de86-49b6-a7cc-fa6d34e8498a","Type":"ContainerDied","Data":"8e7cadbbcdc6972d9e059d8fd6ec68a65dfa544b89a0f6f98b5d2f94f66febc7"} Dec 05 14:12:48 crc kubenswrapper[4858]: I1205 14:12:48.740525 4858 generic.go:334] "Generic (PLEG): container finished" podID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerID="415c921ef45e74e757874a1404657cdf73168526fa276a340350d451e1c99a17" exitCode=0 Dec 05 14:12:48 crc kubenswrapper[4858]: I1205 14:12:48.740606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpwc4" event={"ID":"fc614f41-81ce-4c6e-b574-f5e562cf95ff","Type":"ContainerDied","Data":"415c921ef45e74e757874a1404657cdf73168526fa276a340350d451e1c99a17"} Dec 05 14:12:48 crc kubenswrapper[4858]: I1205 14:12:48.821711 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-75gjs" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="registry-server" probeResult="failure" output=< Dec 05 14:12:48 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:12:48 crc kubenswrapper[4858]: > Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.177688 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.258633 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.273378 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn6sg\" (UniqueName: \"kubernetes.io/projected/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-kube-api-access-nn6sg\") pod \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.273502 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-catalog-content\") pod \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.273542 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-utilities\") pod \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\" (UID: \"f824b4e5-de86-49b6-a7cc-fa6d34e8498a\") " Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.274643 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-utilities" (OuterVolumeSpecName: "utilities") pod "f824b4e5-de86-49b6-a7cc-fa6d34e8498a" (UID: "f824b4e5-de86-49b6-a7cc-fa6d34e8498a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.293110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-kube-api-access-nn6sg" (OuterVolumeSpecName: "kube-api-access-nn6sg") pod "f824b4e5-de86-49b6-a7cc-fa6d34e8498a" (UID: "f824b4e5-de86-49b6-a7cc-fa6d34e8498a"). InnerVolumeSpecName "kube-api-access-nn6sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.325246 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f824b4e5-de86-49b6-a7cc-fa6d34e8498a" (UID: "f824b4e5-de86-49b6-a7cc-fa6d34e8498a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.374345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-catalog-content\") pod \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.374517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg6mv\" (UniqueName: \"kubernetes.io/projected/fc614f41-81ce-4c6e-b574-f5e562cf95ff-kube-api-access-xg6mv\") pod \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.374543 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-utilities\") pod \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\" (UID: \"fc614f41-81ce-4c6e-b574-f5e562cf95ff\") " Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.374800 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.374816 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.374844 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn6sg\" (UniqueName: \"kubernetes.io/projected/f824b4e5-de86-49b6-a7cc-fa6d34e8498a-kube-api-access-nn6sg\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.375540 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-utilities" (OuterVolumeSpecName: "utilities") pod "fc614f41-81ce-4c6e-b574-f5e562cf95ff" (UID: "fc614f41-81ce-4c6e-b574-f5e562cf95ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.378367 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc614f41-81ce-4c6e-b574-f5e562cf95ff-kube-api-access-xg6mv" (OuterVolumeSpecName: "kube-api-access-xg6mv") pod "fc614f41-81ce-4c6e-b574-f5e562cf95ff" (UID: "fc614f41-81ce-4c6e-b574-f5e562cf95ff"). InnerVolumeSpecName "kube-api-access-xg6mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.424274 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc614f41-81ce-4c6e-b574-f5e562cf95ff" (UID: "fc614f41-81ce-4c6e-b574-f5e562cf95ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.475842 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.475886 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc614f41-81ce-4c6e-b574-f5e562cf95ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.475916 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg6mv\" (UniqueName: \"kubernetes.io/projected/fc614f41-81ce-4c6e-b574-f5e562cf95ff-kube-api-access-xg6mv\") on node \"crc\" DevicePath \"\"" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.748806 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lzjtz" event={"ID":"f824b4e5-de86-49b6-a7cc-fa6d34e8498a","Type":"ContainerDied","Data":"fe400637872d6082a9e9d906728afe133ac2af13e810374e7ca2f835610d716f"} Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.748914 4858 scope.go:117] "RemoveContainer" containerID="8e7cadbbcdc6972d9e059d8fd6ec68a65dfa544b89a0f6f98b5d2f94f66febc7" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.749070 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lzjtz" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.752870 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xpwc4" event={"ID":"fc614f41-81ce-4c6e-b574-f5e562cf95ff","Type":"ContainerDied","Data":"1d7edd4cacdddd7dbe4a03493c27570ca8749d79e23b4c451f0c23282c19dc1e"} Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.752954 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xpwc4" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.767543 4858 scope.go:117] "RemoveContainer" containerID="899e919be07ef8bc095d2ddb3951e201bce5d4ede039e8a5ab0c9c4f2a9d5432" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.780133 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lzjtz"] Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.792052 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lzjtz"] Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.798591 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xpwc4"] Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.803027 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xpwc4"] Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.813593 4858 scope.go:117] "RemoveContainer" containerID="4add2b32b4d739aa114d7ae93e2687daffac25a107e9c0d904f0f4385ae612af" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.833865 4858 scope.go:117] "RemoveContainer" containerID="415c921ef45e74e757874a1404657cdf73168526fa276a340350d451e1c99a17" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.866441 4858 scope.go:117] "RemoveContainer" containerID="b6694b45b131a7bd0549a432096392b930e6c5d5043919df41d857f6ae00abb9" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.885547 4858 scope.go:117] "RemoveContainer" containerID="c071aff45c6fc95f91b541a2c4513fc31aae7b83d3e6e5961b9ae0e51b109a5a" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.912066 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" path="/var/lib/kubelet/pods/f824b4e5-de86-49b6-a7cc-fa6d34e8498a/volumes" Dec 05 14:12:49 crc kubenswrapper[4858]: I1205 14:12:49.913040 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" path="/var/lib/kubelet/pods/fc614f41-81ce-4c6e-b574-f5e562cf95ff/volumes" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.791157 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67d95884df-z95jm"] Dec 05 14:12:56 crc kubenswrapper[4858]: E1205 14:12:56.796010 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="extract-utilities" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796050 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="extract-utilities" Dec 05 14:12:56 crc kubenswrapper[4858]: E1205 14:12:56.796077 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerName="extract-utilities" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796084 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerName="extract-utilities" Dec 05 14:12:56 crc kubenswrapper[4858]: E1205 14:12:56.796096 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="extract-content" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796104 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="extract-content" Dec 05 14:12:56 crc 
kubenswrapper[4858]: E1205 14:12:56.796117 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796123 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: E1205 14:12:56.796138 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerName="extract-content" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796145 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerName="extract-content" Dec 05 14:12:56 crc kubenswrapper[4858]: E1205 14:12:56.796161 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796167 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: E1205 14:12:56.796178 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="extract-content" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796184 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="extract-content" Dec 05 14:12:56 crc kubenswrapper[4858]: E1205 14:12:56.796196 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="extract-utilities" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796201 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="extract-utilities" Dec 05 14:12:56 crc kubenswrapper[4858]: E1205 14:12:56.796214 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796222 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796441 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc614f41-81ce-4c6e-b574-f5e562cf95ff" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796452 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f824b4e5-de86-49b6-a7cc-fa6d34e8498a" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.796465 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec98fa5-19e3-4584-b2a4-8bd0c6741a01" containerName="registry-server" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.797290 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.805412 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d95884df-z95jm"] Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.805746 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.806004 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-n8nt6" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.806156 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.813080 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.862237 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65974b8d89-ss8t7"] Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.863718 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.867108 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.881044 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65974b8d89-ss8t7"] Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.917308 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-config\") pod \"dnsmasq-dns-67d95884df-z95jm\" (UID: \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\") " pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:12:56 crc kubenswrapper[4858]: I1205 14:12:56.917593 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqj4r\" (UniqueName: \"kubernetes.io/projected/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-kube-api-access-kqj4r\") pod \"dnsmasq-dns-67d95884df-z95jm\" (UID: \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\") " pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.019190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpdnk\" (UniqueName: \"kubernetes.io/projected/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-kube-api-access-fpdnk\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.019253 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqj4r\" (UniqueName: \"kubernetes.io/projected/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-kube-api-access-kqj4r\") pod \"dnsmasq-dns-67d95884df-z95jm\" (UID: \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\") " pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.019302 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-config\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " 
pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.019355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-dns-svc\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.019377 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-config\") pod \"dnsmasq-dns-67d95884df-z95jm\" (UID: \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\") " pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.020234 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-config\") pod \"dnsmasq-dns-67d95884df-z95jm\" (UID: \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\") " pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.046799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqj4r\" (UniqueName: \"kubernetes.io/projected/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-kube-api-access-kqj4r\") pod \"dnsmasq-dns-67d95884df-z95jm\" (UID: \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\") " pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.119603 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.120525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-config\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.120566 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-dns-svc\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.120619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpdnk\" (UniqueName: \"kubernetes.io/projected/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-kube-api-access-fpdnk\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.134447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-config\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.134982 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-dns-svc\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: 
\"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.138617 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpdnk\" (UniqueName: \"kubernetes.io/projected/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-kube-api-access-fpdnk\") pod \"dnsmasq-dns-65974b8d89-ss8t7\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.179070 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.692101 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67d95884df-z95jm"] Dec 05 14:12:57 crc kubenswrapper[4858]: W1205 14:12:57.696050 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdc96fd8_754e_4f76_8e12_d455a6b35a5a.slice/crio-4622cc3585d88d1d98962cdbb5770e307c7db7e4f5018452258cc6f573732052 WatchSource:0}: Error finding container 4622cc3585d88d1d98962cdbb5770e307c7db7e4f5018452258cc6f573732052: Status 404 returned error can't find the container with id 4622cc3585d88d1d98962cdbb5770e307c7db7e4f5018452258cc6f573732052 Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.777583 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65974b8d89-ss8t7"] Dec 05 14:12:57 crc kubenswrapper[4858]: W1205 14:12:57.788968 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d6ae67d_01f1_4183_a7b7_cd03b427e06a.slice/crio-b0ac0c8a8eb956a2ab8c55c13e807688b70144c50565d7f6d33e6909546205b4 WatchSource:0}: Error finding container b0ac0c8a8eb956a2ab8c55c13e807688b70144c50565d7f6d33e6909546205b4: Status 404 returned error can't find the container with id b0ac0c8a8eb956a2ab8c55c13e807688b70144c50565d7f6d33e6909546205b4 Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.807400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d95884df-z95jm" event={"ID":"bdc96fd8-754e-4f76-8e12-d455a6b35a5a","Type":"ContainerStarted","Data":"4622cc3585d88d1d98962cdbb5770e307c7db7e4f5018452258cc6f573732052"} Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.808523 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" event={"ID":"3d6ae67d-01f1-4183-a7b7-cd03b427e06a","Type":"ContainerStarted","Data":"b0ac0c8a8eb956a2ab8c55c13e807688b70144c50565d7f6d33e6909546205b4"} Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.836931 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:57 crc kubenswrapper[4858]: I1205 14:12:57.890868 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-75gjs" Dec 05 14:12:58 crc kubenswrapper[4858]: I1205 14:12:58.070599 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-75gjs"] Dec 05 14:12:59 crc kubenswrapper[4858]: I1205 14:12:59.829693 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-75gjs" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="registry-server" 
containerID="cri-o://2f06b7a621cf90b14de829b43670f31ae332506f354632390cbee32e37a7ec6b" gracePeriod=2 Dec 05 14:12:59 crc kubenswrapper[4858]: I1205 14:12:59.862351 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65974b8d89-ss8t7"] Dec 05 14:12:59 crc kubenswrapper[4858]: I1205 14:12:59.901394 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-784d8f4b89-d828q"] Dec 05 14:12:59 crc kubenswrapper[4858]: I1205 14:12:59.912538 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:12:59 crc kubenswrapper[4858]: I1205 14:12:59.939441 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784d8f4b89-d828q"] Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.068608 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nw6d\" (UniqueName: \"kubernetes.io/projected/d303d608-2c19-47fa-9623-f84b66025548-kube-api-access-8nw6d\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.068661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-config\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.068688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-dns-svc\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.170714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-dns-svc\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.170832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nw6d\" (UniqueName: \"kubernetes.io/projected/d303d608-2c19-47fa-9623-f84b66025548-kube-api-access-8nw6d\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.170866 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-config\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.171696 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-config\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 
14:13:00.171878 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-dns-svc\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.215457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nw6d\" (UniqueName: \"kubernetes.io/projected/d303d608-2c19-47fa-9623-f84b66025548-kube-api-access-8nw6d\") pod \"dnsmasq-dns-784d8f4b89-d828q\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " pod="openstack/dnsmasq-dns-784d8f4b89-d828q"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.286071 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-784d8f4b89-d828q"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.327841 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d95884df-z95jm"]
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.388758 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d7d7c8dff-98hcf"]
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.389965 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.418233 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7d7c8dff-98hcf"]
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.473578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7lx\" (UniqueName: \"kubernetes.io/projected/6971d622-3415-4baa-88e7-e68b8e2323ae-kube-api-access-zz7lx\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.473634 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-dns-svc\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.473717 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-config\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.574654 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-config\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.574967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz7lx\" (UniqueName: \"kubernetes.io/projected/6971d622-3415-4baa-88e7-e68b8e2323ae-kube-api-access-zz7lx\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.575010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-dns-svc\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.575872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-dns-svc\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.576389 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-config\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.629111 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz7lx\" (UniqueName: \"kubernetes.io/projected/6971d622-3415-4baa-88e7-e68b8e2323ae-kube-api-access-zz7lx\") pod \"dnsmasq-dns-d7d7c8dff-98hcf\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.707290 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf"
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.851472 4858 generic.go:334] "Generic (PLEG): container finished" podID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerID="2f06b7a621cf90b14de829b43670f31ae332506f354632390cbee32e37a7ec6b" exitCode=0
Dec 05 14:13:00 crc kubenswrapper[4858]: I1205 14:13:00.851513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gjs" event={"ID":"7b8fe39c-64da-43dc-ae6c-7d17883a811f","Type":"ContainerDied","Data":"2f06b7a621cf90b14de829b43670f31ae332506f354632390cbee32e37a7ec6b"}
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.056346 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.057623 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.060665 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.061048 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mws78"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.067629 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.069115 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.070030 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.070143 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.070914 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.099608 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183312 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d99fd616-b195-4da7-b7ac-99bed8479e36-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183357 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183376 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183414 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183449 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183470 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vg4\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-kube-api-access-88vg4\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183492 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-config-data\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183535 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d99fd616-b195-4da7-b7ac-99bed8479e36-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183555 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.183587 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.284910 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vg4\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-kube-api-access-88vg4\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.284967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-config-data\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.285010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.285038 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d99fd616-b195-4da7-b7ac-99bed8479e36-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.285762 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.285819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.285882 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d99fd616-b195-4da7-b7ac-99bed8479e36-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.285918 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.285969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.286010 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.286061 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.287050 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-config-data\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.287055 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.287368 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.287892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.288033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.290500 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.290748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.293382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d99fd616-b195-4da7-b7ac-99bed8479e36-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.293616 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.301816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vg4\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-kube-api-access-88vg4\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.314776 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.319054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d99fd616-b195-4da7-b7ac-99bed8479e36-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.395417 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.490603 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.492698 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.497143 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.497405 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.497661 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.497925 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vvxs4"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.500248 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.501648 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.502147 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.522550 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.595964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzv22\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-kube-api-access-mzv22\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596012 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96d65651-be4c-475d-b4dc-293f42b30e39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596077 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596102 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596120 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596169 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596196 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596214 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596229 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96d65651-be4c-475d-b4dc-293f42b30e39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.596252 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.697996 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698039 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698146 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96d65651-be4c-475d-b4dc-293f42b30e39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698215 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzv22\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-kube-api-access-mzv22\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698236 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698258 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96d65651-be4c-475d-b4dc-293f42b30e39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.698369 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.699382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.700032 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.710036 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96d65651-be4c-475d-b4dc-293f42b30e39-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.710605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.710884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.711858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.712622 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.715970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96d65651-be4c-475d-b4dc-293f42b30e39-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.722612 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.732047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.736077 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzv22\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-kube-api-access-mzv22\") pod \"rabbitmq-cell1-server-0\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:01 crc kubenswrapper[4858]: I1205 14:13:01.818545 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.729510 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.733430 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.736363 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.736564 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.736846 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-76bxq"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.737511 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.737667 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.760902 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.820460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.820529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-kolla-config\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.820610 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.820653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgrr\" (UniqueName: \"kubernetes.io/projected/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-kube-api-access-ghgrr\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.820682 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-config-data-default\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.820750 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.820783 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.820871 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.922142 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.922195 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.922252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.922293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.922319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-kolla-config\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.922347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.922366 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgrr\" (UniqueName: \"kubernetes.io/projected/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-kube-api-access-ghgrr\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.922382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-config-data-default\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.923176 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.923568 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-config-data-default\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.923648 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.924079 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-kolla-config\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.924782 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.928403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.928640 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.946333 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgrr\" (UniqueName: \"kubernetes.io/projected/535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e-kube-api-access-ghgrr\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:02 crc kubenswrapper[4858]: I1205 14:13:02.960114 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e\") " pod="openstack/openstack-galera-0"
Dec 05 14:13:03 crc kubenswrapper[4858]: I1205 14:13:03.076508 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.121848 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.124160 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.127291 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.127511 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.128345 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-5tbp7"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.128770 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.137447 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.241579 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.241693 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zhj\" (UniqueName: \"kubernetes.io/projected/709c2e19-3180-41ef-9341-df5e95e1733a-kube-api-access-54zhj\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.241733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c2e19-3180-41ef-9341-df5e95e1733a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.241914 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.242041 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.242086 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c2e19-3180-41ef-9341-df5e95e1733a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.242157 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c2e19-3180-41ef-9341-df5e95e1733a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.242185 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.343508 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c2e19-3180-41ef-9341-df5e95e1733a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.343558 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.343578 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.343605 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c2e19-3180-41ef-9341-df5e95e1733a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.343620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54zhj\" (UniqueName: \"kubernetes.io/projected/709c2e19-3180-41ef-9341-df5e95e1733a-kube-api-access-54zhj\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.343708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.343757 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.343778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c2e19-3180-41ef-9341-df5e95e1733a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.348981 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c2e19-3180-41ef-9341-df5e95e1733a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.349312 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c2e19-3180-41ef-9341-df5e95e1733a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.352928 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.353757 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.358912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c2e19-3180-41ef-9341-df5e95e1733a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.359814 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c2e19-3180-41ef-9341-df5e95e1733a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.359998 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.650700 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54zhj\" (UniqueName: \"kubernetes.io/projected/709c2e19-3180-41ef-9341-df5e95e1733a-kube-api-access-54zhj\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.678154 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"709c2e19-3180-41ef-9341-df5e95e1733a\") " pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.806311 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.880532 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.881540 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.888515 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.889166 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.893581 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cb42a5-30b0-41d4-ba81-e316df2af14b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.893809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pfnd\" (UniqueName: \"kubernetes.io/projected/89cb42a5-30b0-41d4-ba81-e316df2af14b-kube-api-access-7pfnd\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.893935 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89cb42a5-30b0-41d4-ba81-e316df2af14b-kolla-config\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.894019 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89cb42a5-30b0-41d4-ba81-e316df2af14b-config-data\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.894113 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cb42a5-30b0-41d4-ba81-e316df2af14b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.897352 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-786b5"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.904589 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.995207 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cb42a5-30b0-41d4-ba81-e316df2af14b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.995252 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pfnd\" (UniqueName: \"kubernetes.io/projected/89cb42a5-30b0-41d4-ba81-e316df2af14b-kube-api-access-7pfnd\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.995278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89cb42a5-30b0-41d4-ba81-e316df2af14b-kolla-config\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.995299 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89cb42a5-30b0-41d4-ba81-e316df2af14b-config-data\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.995318 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cb42a5-30b0-41d4-ba81-e316df2af14b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.996862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/89cb42a5-30b0-41d4-ba81-e316df2af14b-kolla-config\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.997159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89cb42a5-30b0-41d4-ba81-e316df2af14b-config-data\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.999287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cb42a5-30b0-41d4-ba81-e316df2af14b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:04 crc kubenswrapper[4858]: I1205 14:13:04.999445 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/89cb42a5-30b0-41d4-ba81-e316df2af14b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:05 crc kubenswrapper[4858]: I1205 14:13:05.012106 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pfnd\" (UniqueName: \"kubernetes.io/projected/89cb42a5-30b0-41d4-ba81-e316df2af14b-kube-api-access-7pfnd\") pod \"memcached-0\" (UID: \"89cb42a5-30b0-41d4-ba81-e316df2af14b\") " pod="openstack/memcached-0"
Dec 05 14:13:05 crc kubenswrapper[4858]: I1205 14:13:05.198707 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 05 14:13:05 crc kubenswrapper[4858]: I1205 14:13:05.997856 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75gjs"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.010971 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-catalog-content\") pod \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") "
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.011125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9dw5\" (UniqueName: \"kubernetes.io/projected/7b8fe39c-64da-43dc-ae6c-7d17883a811f-kube-api-access-k9dw5\") pod \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") "
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.011288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-utilities\") pod \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\" (UID: \"7b8fe39c-64da-43dc-ae6c-7d17883a811f\") "
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.012179 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-utilities" (OuterVolumeSpecName: "utilities") pod "7b8fe39c-64da-43dc-ae6c-7d17883a811f" (UID: "7b8fe39c-64da-43dc-ae6c-7d17883a811f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.012672 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.028571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8fe39c-64da-43dc-ae6c-7d17883a811f-kube-api-access-k9dw5" (OuterVolumeSpecName: "kube-api-access-k9dw5") pod "7b8fe39c-64da-43dc-ae6c-7d17883a811f" (UID: "7b8fe39c-64da-43dc-ae6c-7d17883a811f"). InnerVolumeSpecName "kube-api-access-k9dw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.113672 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9dw5\" (UniqueName: \"kubernetes.io/projected/7b8fe39c-64da-43dc-ae6c-7d17883a811f-kube-api-access-k9dw5\") on node \"crc\" DevicePath \"\""
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.156178 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b8fe39c-64da-43dc-ae6c-7d17883a811f" (UID: "7b8fe39c-64da-43dc-ae6c-7d17883a811f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.215614 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b8fe39c-64da-43dc-ae6c-7d17883a811f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.255117 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 05 14:13:06 crc kubenswrapper[4858]: E1205 14:13:06.255727 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="extract-utilities"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.255855 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="extract-utilities"
Dec 05 14:13:06 crc kubenswrapper[4858]: E1205 14:13:06.255938 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="registry-server"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.256010 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="registry-server"
Dec 05 14:13:06 crc kubenswrapper[4858]: E1205 14:13:06.256085 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="extract-content"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.256171 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="extract-content"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.256479 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" containerName="registry-server"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.257198 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.262341 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bfcrr"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.278413 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.317388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45ht\" (UniqueName: \"kubernetes.io/projected/805d1f07-ba33-4534-8fe0-3697049c2eb6-kube-api-access-l45ht\") pod \"kube-state-metrics-0\" (UID: \"805d1f07-ba33-4534-8fe0-3697049c2eb6\") " pod="openstack/kube-state-metrics-0"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.418724 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45ht\" (UniqueName: \"kubernetes.io/projected/805d1f07-ba33-4534-8fe0-3697049c2eb6-kube-api-access-l45ht\") pod \"kube-state-metrics-0\" (UID: \"805d1f07-ba33-4534-8fe0-3697049c2eb6\") " pod="openstack/kube-state-metrics-0"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.444906 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45ht\" (UniqueName: \"kubernetes.io/projected/805d1f07-ba33-4534-8fe0-3697049c2eb6-kube-api-access-l45ht\") pod \"kube-state-metrics-0\" (UID: \"805d1f07-ba33-4534-8fe0-3697049c2eb6\") " pod="openstack/kube-state-metrics-0"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.576917 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.898628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75gjs" event={"ID":"7b8fe39c-64da-43dc-ae6c-7d17883a811f","Type":"ContainerDied","Data":"ac62137e2e14d7d4e367a4207aac6cb4d4afdd0afda3b8cdcc0ede654f513b70"}
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.898681 4858 scope.go:117] "RemoveContainer" containerID="2f06b7a621cf90b14de829b43670f31ae332506f354632390cbee32e37a7ec6b"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.898699 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75gjs"
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.928769 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-75gjs"]
Dec 05 14:13:06 crc kubenswrapper[4858]: I1205 14:13:06.941937 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-75gjs"]
Dec 05 14:13:07 crc kubenswrapper[4858]: I1205 14:13:07.909391 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8fe39c-64da-43dc-ae6c-7d17883a811f" path="/var/lib/kubelet/pods/7b8fe39c-64da-43dc-ae6c-7d17883a811f/volumes"
Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.566369 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gtl95"]
Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.568520 4858 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.573760 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-sqdhh" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.574074 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.574213 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.582943 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gtl95"] Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.591515 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgktl\" (UniqueName: \"kubernetes.io/projected/07c39bc3-5d28-49a6-88b6-348d08f7b61a-kube-api-access-kgktl\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.591551 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c39bc3-5d28-49a6-88b6-348d08f7b61a-ovn-controller-tls-certs\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.591600 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-run\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.591643 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-run-ovn\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.591671 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07c39bc3-5d28-49a6-88b6-348d08f7b61a-scripts\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.591754 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-log-ovn\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.591789 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c39bc3-5d28-49a6-88b6-348d08f7b61a-combined-ca-bundle\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.603413 4858 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kk9tz"] Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.604995 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.630673 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kk9tz"] Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693067 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-run\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693133 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-log-ovn\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693194 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c39bc3-5d28-49a6-88b6-348d08f7b61a-combined-ca-bundle\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-log\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgktl\" (UniqueName: \"kubernetes.io/projected/07c39bc3-5d28-49a6-88b6-348d08f7b61a-kube-api-access-kgktl\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693333 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c39bc3-5d28-49a6-88b6-348d08f7b61a-ovn-controller-tls-certs\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693367 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4pl\" (UniqueName: \"kubernetes.io/projected/f902132d-be72-462e-acae-0765edc6a2fd-kube-api-access-wn4pl\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f902132d-be72-462e-acae-0765edc6a2fd-scripts\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc 
kubenswrapper[4858]: I1205 14:13:10.693439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-run\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693476 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-run-ovn\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693495 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-lib\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693511 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07c39bc3-5d28-49a6-88b6-348d08f7b61a-scripts\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-etc-ovs\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-log-ovn\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.693800 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-run\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.694034 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07c39bc3-5d28-49a6-88b6-348d08f7b61a-var-run-ovn\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.695681 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07c39bc3-5d28-49a6-88b6-348d08f7b61a-scripts\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.701193 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/07c39bc3-5d28-49a6-88b6-348d08f7b61a-ovn-controller-tls-certs\") pod \"ovn-controller-gtl95\" 
(UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.701300 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c39bc3-5d28-49a6-88b6-348d08f7b61a-combined-ca-bundle\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.713024 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgktl\" (UniqueName: \"kubernetes.io/projected/07c39bc3-5d28-49a6-88b6-348d08f7b61a-kube-api-access-kgktl\") pod \"ovn-controller-gtl95\" (UID: \"07c39bc3-5d28-49a6-88b6-348d08f7b61a\") " pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.795280 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f902132d-be72-462e-acae-0765edc6a2fd-scripts\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.795369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-lib\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.795407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-etc-ovs\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.795427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-run\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.795466 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-log\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.795528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn4pl\" (UniqueName: \"kubernetes.io/projected/f902132d-be72-462e-acae-0765edc6a2fd-kube-api-access-wn4pl\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.796046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-run\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.796256 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-lib\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.796262 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-var-log\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.796630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f902132d-be72-462e-acae-0765edc6a2fd-etc-ovs\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.798409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f902132d-be72-462e-acae-0765edc6a2fd-scripts\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.812927 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn4pl\" (UniqueName: \"kubernetes.io/projected/f902132d-be72-462e-acae-0765edc6a2fd-kube-api-access-wn4pl\") pod \"ovn-controller-ovs-kk9tz\" (UID: \"f902132d-be72-462e-acae-0765edc6a2fd\") " pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.900837 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gtl95" Dec 05 14:13:10 crc kubenswrapper[4858]: I1205 14:13:10.945450 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.424867 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.427258 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.429452 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.430070 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mcgmr" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.430750 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.431084 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.431909 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.446011 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.507434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c61018-b6f5-488a-948c-7eacd25c0b8e-config\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.507579 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.507616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.507658 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.507686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t8cg\" (UniqueName: \"kubernetes.io/projected/c4c61018-b6f5-488a-948c-7eacd25c0b8e-kube-api-access-9t8cg\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.507724 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c61018-b6f5-488a-948c-7eacd25c0b8e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.507758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/c4c61018-b6f5-488a-948c-7eacd25c0b8e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.507797 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.608915 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.608958 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.608992 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.609012 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t8cg\" (UniqueName: \"kubernetes.io/projected/c4c61018-b6f5-488a-948c-7eacd25c0b8e-kube-api-access-9t8cg\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.609035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c61018-b6f5-488a-948c-7eacd25c0b8e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.609056 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c61018-b6f5-488a-948c-7eacd25c0b8e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.609082 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.609108 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c61018-b6f5-488a-948c-7eacd25c0b8e-config\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 
14:13:11.609847 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4c61018-b6f5-488a-948c-7eacd25c0b8e-config\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.610138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c4c61018-b6f5-488a-948c-7eacd25c0b8e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.610500 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.610911 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4c61018-b6f5-488a-948c-7eacd25c0b8e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.615446 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.615769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.634553 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4c61018-b6f5-488a-948c-7eacd25c0b8e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.634971 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t8cg\" (UniqueName: \"kubernetes.io/projected/c4c61018-b6f5-488a-948c-7eacd25c0b8e-kube-api-access-9t8cg\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.653398 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c4c61018-b6f5-488a-948c-7eacd25c0b8e\") " pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:11 crc kubenswrapper[4858]: I1205 14:13:11.750788 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.024616 4858 scope.go:117] "RemoveContainer" containerID="9b9c85222227d1093a669a2871c99f5bc7aaf8a90a1af81c6f1d35df2cb282dd" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.144037 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.148526 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.152143 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.152186 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.152604 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.152739 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-g29vg" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.170120 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.247331 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.247414 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-config\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.247449 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.247468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.247491 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4xs\" (UniqueName: \"kubernetes.io/projected/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-kube-api-access-dx4xs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.247513 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.247532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.247551 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.349381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.349436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.349495 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.349573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-config\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.349618 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.349642 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.349675 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4xs\" (UniqueName: \"kubernetes.io/projected/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-kube-api-access-dx4xs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.349704 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.350751 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-config\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.352072 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.352567 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.354169 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.360087 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.368688 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.377459 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4xs\" (UniqueName: \"kubernetes.io/projected/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-kube-api-access-dx4xs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.377961 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.382618 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18eb80fb-2c3b-4c85-b52b-e3a0821ba693-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"18eb80fb-2c3b-4c85-b52b-e3a0821ba693\") " pod="openstack/ovsdbserver-sb-0" Dec 05 
14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.478190 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.760050 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:13:14 crc kubenswrapper[4858]: I1205 14:13:14.760197 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:13:19 crc kubenswrapper[4858]: I1205 14:13:19.545171 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 14:13:19 crc kubenswrapper[4858]: E1205 14:13:19.971292 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:19 crc kubenswrapper[4858]: E1205 14:13:19.971361 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:19 crc kubenswrapper[4858]: E1205 14:13:19.971490 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqj4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-67d95884df-z95jm_openstack(bdc96fd8-754e-4f76-8e12-d455a6b35a5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:13:19 crc kubenswrapper[4858]: E1205 14:13:19.972681 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-67d95884df-z95jm" podUID="bdc96fd8-754e-4f76-8e12-d455a6b35a5a" Dec 05 14:13:20 crc kubenswrapper[4858]: I1205 14:13:20.000934 4858 scope.go:117] "RemoveContainer" containerID="49e0062994158789dc2eeb66d51590ffb72545650d599de3f073dfb76d33663f" Dec 05 14:13:20 crc kubenswrapper[4858]: E1205 14:13:20.029200 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:20 crc kubenswrapper[4858]: E1205 14:13:20.029289 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:20 crc kubenswrapper[4858]: E1205 14:13:20.029480 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-neutron-server:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpdnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-65974b8d89-ss8t7_openstack(3d6ae67d-01f1-4183-a7b7-cd03b427e06a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:13:20 crc kubenswrapper[4858]: E1205 14:13:20.030958 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" podUID="3d6ae67d-01f1-4183-a7b7-cd03b427e06a" Dec 05 14:13:20 crc kubenswrapper[4858]: I1205 14:13:20.732093 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-784d8f4b89-d828q"] Dec 05 14:13:20 crc kubenswrapper[4858]: I1205 14:13:20.745569 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 14:13:20 crc kubenswrapper[4858]: W1205 14:13:20.778928 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd303d608_2c19_47fa_9623_f84b66025548.slice/crio-da32ff63d2c0e3cf24179facbee75c7cccc460a7b2397c98736929bc996dbe98 WatchSource:0}: Error finding container da32ff63d2c0e3cf24179facbee75c7cccc460a7b2397c98736929bc996dbe98: Status 404 returned error can't find the container with id da32ff63d2c0e3cf24179facbee75c7cccc460a7b2397c98736929bc996dbe98 Dec 05 14:13:20 crc kubenswrapper[4858]: W1205 14:13:20.781294 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod535bf7fb_3e78_4bdb_8ed6_0f6d3b45d09e.slice/crio-affb75cfb2e90b75959d005493cf304c24ea78150a2d8593e0ee0af8e98740f5 WatchSource:0}: Error finding container affb75cfb2e90b75959d005493cf304c24ea78150a2d8593e0ee0af8e98740f5: Status 404 returned error can't 
find the container with id affb75cfb2e90b75959d005493cf304c24ea78150a2d8593e0ee0af8e98740f5 Dec 05 14:13:20 crc kubenswrapper[4858]: I1205 14:13:20.804339 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:13:20 crc kubenswrapper[4858]: I1205 14:13:20.987706 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-config\") pod \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\" (UID: \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\") " Dec 05 14:13:20 crc kubenswrapper[4858]: I1205 14:13:20.987857 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqj4r\" (UniqueName: \"kubernetes.io/projected/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-kube-api-access-kqj4r\") pod \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\" (UID: \"bdc96fd8-754e-4f76-8e12-d455a6b35a5a\") " Dec 05 14:13:20 crc kubenswrapper[4858]: I1205 14:13:20.988981 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-config" (OuterVolumeSpecName: "config") pod "bdc96fd8-754e-4f76-8e12-d455a6b35a5a" (UID: "bdc96fd8-754e-4f76-8e12-d455a6b35a5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:20 crc kubenswrapper[4858]: I1205 14:13:20.996124 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-kube-api-access-kqj4r" (OuterVolumeSpecName: "kube-api-access-kqj4r") pod "bdc96fd8-754e-4f76-8e12-d455a6b35a5a" (UID: "bdc96fd8-754e-4f76-8e12-d455a6b35a5a"). InnerVolumeSpecName "kube-api-access-kqj4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.065933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"89cb42a5-30b0-41d4-ba81-e316df2af14b","Type":"ContainerStarted","Data":"5aec787f2fbf50ab4728f92ff9d60d10453cd7eff7a894d2315d110affddfacd"} Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.068747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67d95884df-z95jm" event={"ID":"bdc96fd8-754e-4f76-8e12-d455a6b35a5a","Type":"ContainerDied","Data":"4622cc3585d88d1d98962cdbb5770e307c7db7e4f5018452258cc6f573732052"} Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.068877 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67d95884df-z95jm" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.075925 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e","Type":"ContainerStarted","Data":"affb75cfb2e90b75959d005493cf304c24ea78150a2d8593e0ee0af8e98740f5"} Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.078019 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" event={"ID":"d303d608-2c19-47fa-9623-f84b66025548","Type":"ContainerStarted","Data":"da32ff63d2c0e3cf24179facbee75c7cccc460a7b2397c98736929bc996dbe98"} Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.091185 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqj4r\" (UniqueName: \"kubernetes.io/projected/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-kube-api-access-kqj4r\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.091219 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc96fd8-754e-4f76-8e12-d455a6b35a5a-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.204614 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.217867 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 14:13:21 crc kubenswrapper[4858]: W1205 14:13:21.235928 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod805d1f07_ba33_4534_8fe0_3697049c2eb6.slice/crio-f22b49108b18b8ed83af2520740d7e26067700d2d0ee7f48ad64c8694993cf62 WatchSource:0}: Error finding container f22b49108b18b8ed83af2520740d7e26067700d2d0ee7f48ad64c8694993cf62: Status 404 returned error can't find the container with id f22b49108b18b8ed83af2520740d7e26067700d2d0ee7f48ad64c8694993cf62 Dec 05 14:13:21 crc kubenswrapper[4858]: W1205 14:13:21.239312 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96d65651_be4c_475d_b4dc_293f42b30e39.slice/crio-4af8e2d9d60a89a6f44a393e31b47ab5794adada5bb2fe67b1cae37debfb7d8f WatchSource:0}: Error finding container 4af8e2d9d60a89a6f44a393e31b47ab5794adada5bb2fe67b1cae37debfb7d8f: Status 404 returned error can't find the container with id 4af8e2d9d60a89a6f44a393e31b47ab5794adada5bb2fe67b1cae37debfb7d8f Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.332223 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67d95884df-z95jm"] Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.369939 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67d95884df-z95jm"] Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.386355 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d7d7c8dff-98hcf"] Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.410465 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.426117 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 14:13:21 crc kubenswrapper[4858]: W1205 14:13:21.443642 4858 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd99fd616_b195_4da7_b7ac_99bed8479e36.slice/crio-753c601f6aa088a114036a0237762a6955f8124efa4fd621af187c5e304f8a18 WatchSource:0}: Error finding container 753c601f6aa088a114036a0237762a6955f8124efa4fd621af187c5e304f8a18: Status 404 returned error can't find the container with id 753c601f6aa088a114036a0237762a6955f8124efa4fd621af187c5e304f8a18 Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.594310 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gtl95"] Dec 05 14:13:21 crc kubenswrapper[4858]: W1205 14:13:21.608560 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c39bc3_5d28_49a6_88b6_348d08f7b61a.slice/crio-a8b0cc6f520a2366954f874ea61e886cd467626ea333b6ec1e5563a3f7d35a08 WatchSource:0}: Error finding container a8b0cc6f520a2366954f874ea61e886cd467626ea333b6ec1e5563a3f7d35a08: Status 404 returned error can't find the container with id a8b0cc6f520a2366954f874ea61e886cd467626ea333b6ec1e5563a3f7d35a08 Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.667067 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.716306 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.808250 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kk9tz"] Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.820322 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-config\") pod \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.820430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-dns-svc\") pod \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.820568 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpdnk\" (UniqueName: \"kubernetes.io/projected/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-kube-api-access-fpdnk\") pod \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\" (UID: \"3d6ae67d-01f1-4183-a7b7-cd03b427e06a\") " Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.822681 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-config" (OuterVolumeSpecName: "config") pod "3d6ae67d-01f1-4183-a7b7-cd03b427e06a" (UID: "3d6ae67d-01f1-4183-a7b7-cd03b427e06a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.827503 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d6ae67d-01f1-4183-a7b7-cd03b427e06a" (UID: "3d6ae67d-01f1-4183-a7b7-cd03b427e06a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:21 crc kubenswrapper[4858]: W1205 14:13:21.833527 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902132d_be72_462e_acae_0765edc6a2fd.slice/crio-83f4077ae6881d3b5d7d99b291d821b29e1efc6c764f8bed2cd0c3c513237f7b WatchSource:0}: Error finding container 83f4077ae6881d3b5d7d99b291d821b29e1efc6c764f8bed2cd0c3c513237f7b: Status 404 returned error can't find the container with id 83f4077ae6881d3b5d7d99b291d821b29e1efc6c764f8bed2cd0c3c513237f7b Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.835744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-kube-api-access-fpdnk" (OuterVolumeSpecName: "kube-api-access-fpdnk") pod "3d6ae67d-01f1-4183-a7b7-cd03b427e06a" (UID: "3d6ae67d-01f1-4183-a7b7-cd03b427e06a"). InnerVolumeSpecName "kube-api-access-fpdnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.927789 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpdnk\" (UniqueName: \"kubernetes.io/projected/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-kube-api-access-fpdnk\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.927835 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.927850 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d6ae67d-01f1-4183-a7b7-cd03b427e06a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:21 crc kubenswrapper[4858]: I1205 14:13:21.940635 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc96fd8-754e-4f76-8e12-d455a6b35a5a" path="/var/lib/kubelet/pods/bdc96fd8-754e-4f76-8e12-d455a6b35a5a/volumes" Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.109793 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"805d1f07-ba33-4534-8fe0-3697049c2eb6","Type":"ContainerStarted","Data":"f22b49108b18b8ed83af2520740d7e26067700d2d0ee7f48ad64c8694993cf62"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.123636 4858 generic.go:334] "Generic (PLEG): container finished" podID="6971d622-3415-4baa-88e7-e68b8e2323ae" containerID="15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b" exitCode=0 Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.123729 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" event={"ID":"6971d622-3415-4baa-88e7-e68b8e2323ae","Type":"ContainerDied","Data":"15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.123777 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" event={"ID":"6971d622-3415-4baa-88e7-e68b8e2323ae","Type":"ContainerStarted","Data":"9c56f6ffffcb31d5af3b8480807aedd96882c67ba981ca0cc2bd54328b5e1779"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.129364 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"d99fd616-b195-4da7-b7ac-99bed8479e36","Type":"ContainerStarted","Data":"753c601f6aa088a114036a0237762a6955f8124efa4fd621af187c5e304f8a18"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.143997 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96d65651-be4c-475d-b4dc-293f42b30e39","Type":"ContainerStarted","Data":"4af8e2d9d60a89a6f44a393e31b47ab5794adada5bb2fe67b1cae37debfb7d8f"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.149601 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk9tz" event={"ID":"f902132d-be72-462e-acae-0765edc6a2fd","Type":"ContainerStarted","Data":"83f4077ae6881d3b5d7d99b291d821b29e1efc6c764f8bed2cd0c3c513237f7b"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.151225 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"709c2e19-3180-41ef-9341-df5e95e1733a","Type":"ContainerStarted","Data":"cc69ca84444fa011a01e25307113683b1803acac88aad1ab8c179117d7e0989b"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.158646 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" event={"ID":"3d6ae67d-01f1-4183-a7b7-cd03b427e06a","Type":"ContainerDied","Data":"b0ac0c8a8eb956a2ab8c55c13e807688b70144c50565d7f6d33e6909546205b4"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.158729 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65974b8d89-ss8t7" Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.160948 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtl95" event={"ID":"07c39bc3-5d28-49a6-88b6-348d08f7b61a","Type":"ContainerStarted","Data":"a8b0cc6f520a2366954f874ea61e886cd467626ea333b6ec1e5563a3f7d35a08"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.162338 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"18eb80fb-2c3b-4c85-b52b-e3a0821ba693","Type":"ContainerStarted","Data":"ee6c5afc3e1ea31f808168bbbdd04d735dca47cc2f5ec7ab9c95ee8349cbc237"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.171301 4858 generic.go:334] "Generic (PLEG): container finished" podID="d303d608-2c19-47fa-9623-f84b66025548" containerID="f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92" exitCode=0 Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.171333 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" event={"ID":"d303d608-2c19-47fa-9623-f84b66025548","Type":"ContainerDied","Data":"f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92"} Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.279846 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65974b8d89-ss8t7"] Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.294417 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65974b8d89-ss8t7"] Dec 05 14:13:22 crc kubenswrapper[4858]: I1205 14:13:22.614355 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 14:13:23 crc kubenswrapper[4858]: I1205 14:13:23.954010 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6ae67d-01f1-4183-a7b7-cd03b427e06a" path="/var/lib/kubelet/pods/3d6ae67d-01f1-4183-a7b7-cd03b427e06a/volumes" Dec 05 14:13:24 crc kubenswrapper[4858]: W1205 
14:13:24.386053 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4c61018_b6f5_488a_948c_7eacd25c0b8e.slice/crio-49c436ab2af27ffb500893e790977d3ff83f4f895a16258bf444e234bd931940 WatchSource:0}: Error finding container 49c436ab2af27ffb500893e790977d3ff83f4f895a16258bf444e234bd931940: Status 404 returned error can't find the container with id 49c436ab2af27ffb500893e790977d3ff83f4f895a16258bf444e234bd931940 Dec 05 14:13:25 crc kubenswrapper[4858]: I1205 14:13:25.197689 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c4c61018-b6f5-488a-948c-7eacd25c0b8e","Type":"ContainerStarted","Data":"49c436ab2af27ffb500893e790977d3ff83f4f895a16258bf444e234bd931940"} Dec 05 14:13:33 crc kubenswrapper[4858]: E1205 14:13:33.955316 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:33 crc kubenswrapper[4858]: E1205 14:13:33.955924 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:33 crc kubenswrapper[4858]: E1205 14:13:33.956111 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-88vg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(d99fd616-b195-4da7-b7ac-99bed8479e36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:13:33 crc kubenswrapper[4858]: E1205 14:13:33.957236 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.081810 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wrph5"] Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.083807 4858 util.go:30] "No sandbox for pod can be found. 
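
The repeated manager.go:1169 warnings above ("Status 404 ... can't find the container") are consistent with cAdvisor noticing a freshly created cgroup before CRI-O has registered the container: the lookup fails once, and the matching ContainerStarted record arrives about a second later, so the warnings appear benign here. When correlating them, the pod UID and runtime container ID can be recovered from the cgroup path itself. The slice naming convention below is read straight off the log lines; this is a parsing sketch for log triage, not an API.

    // Pull the pod UID and runtime container ID out of a kubepods cgroup
    // path like the ones in the 404 warnings above.
    package main

    import (
        "fmt"
        "strings"
    )

    func parse(p string) (podUID, containerID string) {
        for _, seg := range strings.Split(p, "/") {
            // pod slice: ...-pod<uid with underscores>.slice
            if i := strings.Index(seg, "-pod"); i >= 0 && strings.HasSuffix(seg, ".slice") {
                uid := strings.TrimSuffix(seg[i+len("-pod"):], ".slice")
                podUID = strings.ReplaceAll(uid, "_", "-")
            }
            // container scope: crio-<runtime id>
            if strings.HasPrefix(seg, "crio-") {
                containerID = strings.TrimPrefix(seg, "crio-")
            }
        }
        return
    }

    func main() {
        p := "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod805d1f07_ba33_4534_8fe0_3697049c2eb6.slice/crio-f22b49108b18b8ed83af2520740d7e26067700d2d0ee7f48ad64c8694993cf62"
        uid, id := parse(p)
        fmt.Println(uid) // 805d1f07-ba33-4534-8fe0-3697049c2eb6
        fmt.Println(id)  // f22b49108b18...
    }
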
Need to start a new one" pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.088221 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.098734 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wrph5"] Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.165162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-ovs-rundir\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.165238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-combined-ca-bundle\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.165263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-ovn-rundir\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.165306 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twmgf\" (UniqueName: \"kubernetes.io/projected/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-kube-api-access-twmgf\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.165407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.165438 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-config\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.242470 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784d8f4b89-d828q"] Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.266505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.266566 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-config\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.266605 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-ovs-rundir\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.266654 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-combined-ca-bundle\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.266676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-ovn-rundir\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.266701 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twmgf\" (UniqueName: \"kubernetes.io/projected/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-kube-api-access-twmgf\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.267685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-config\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.268194 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-ovn-rundir\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.268251 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-ovs-rundir\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: E1205 14:13:34.268445 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/rabbitmq-server-0" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.274139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.274435 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-combined-ca-bundle\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.328800 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74b9cbccdc-86495"] Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.330205 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.341778 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.343657 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74b9cbccdc-86495"] Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.372596 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twmgf\" (UniqueName: \"kubernetes.io/projected/994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f-kube-api-access-twmgf\") pod \"ovn-controller-metrics-wrph5\" (UID: \"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f\") " pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.431686 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wrph5" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.472219 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-dns-svc\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.472313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-config\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.472365 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.472395 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlhtd\" (UniqueName: \"kubernetes.io/projected/fb68e5db-617e-469b-ada4-41ae2d186f8b-kube-api-access-zlhtd\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.528512 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d7d7c8dff-98hcf"] Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.566620 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d45fc4855-kd46w"] Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.568252 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: W1205 14:13:34.571445 4858 reflector.go:561] object-"openstack"/"ovsdbserver-sb": failed to list *v1.ConfigMap: configmaps "ovsdbserver-sb" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 05 14:13:34 crc kubenswrapper[4858]: E1205 14:13:34.571694 4858 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-sb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovsdbserver-sb\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.574176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.574320 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlhtd\" (UniqueName: \"kubernetes.io/projected/fb68e5db-617e-469b-ada4-41ae2d186f8b-kube-api-access-zlhtd\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.574475 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-dns-svc\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.574580 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-config\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.575534 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-ovsdbserver-nb\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.576031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-config\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.576784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-dns-svc\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc 
kubenswrapper[4858]: I1205 14:13:34.597587 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d45fc4855-kd46w"] Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.625159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlhtd\" (UniqueName: \"kubernetes.io/projected/fb68e5db-617e-469b-ada4-41ae2d186f8b-kube-api-access-zlhtd\") pod \"dnsmasq-dns-74b9cbccdc-86495\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.657109 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.676259 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-dns-svc\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.676328 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.676357 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.676375 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-config\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.676431 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfxk\" (UniqueName: \"kubernetes.io/projected/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-kube-api-access-mpfxk\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.783132 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpfxk\" (UniqueName: \"kubernetes.io/projected/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-kube-api-access-mpfxk\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.783313 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-dns-svc\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc 
kubenswrapper[4858]: I1205 14:13:34.783407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.783454 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.783482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-config\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.784946 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-dns-svc\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.785214 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-config\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.787339 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:34 crc kubenswrapper[4858]: I1205 14:13:34.808410 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpfxk\" (UniqueName: \"kubernetes.io/projected/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-kube-api-access-mpfxk\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:35 crc kubenswrapper[4858]: E1205 14:13:35.784713 4858 configmap.go:193] Couldn't get configMap openstack/ovsdbserver-sb: failed to sync configmap cache: timed out waiting for the condition Dec 05 14:13:35 crc kubenswrapper[4858]: E1205 14:13:35.785077 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb podName:e50a36d0-5f6f-49a0-92df-08fe6f997a4d nodeName:}" failed. No retries permitted until 2025-12-05 14:13:36.285057007 +0000 UTC m=+1024.832655146 (durationBeforeRetry 500ms). 
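
The sequence just above is self-correcting: listing configmap ovsdbserver-sb is forbidden at 14:13:34.571 because the node authorizer has not yet linked node crc to the new pod, the mount therefore times out waiting for the configmap cache, and the operation is parked with durationBeforeRetry 500ms. Once the reflector reports "Caches populated" just below, the retried mount succeeds. The 500ms comes straight from the log line; kubelet backs the delay off exponentially if the failure repeats, which under the usual doubling-with-a-cap assumption looks like this sketch:

    // How "durationBeforeRetry 500ms" grows if a mount keeps failing.
    // The initial 500ms is read off the log line above; the doubling and
    // the ~2m ceiling are assumptions about kubelet's backoff, not quoted from it.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxDelay = 2*time.Minute + 2*time.Second // assumed ceiling
        delay := 500 * time.Millisecond
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("failure %2d: next retry in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
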
Error: MountVolume.SetUp failed for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb") pod "dnsmasq-dns-6d45fc4855-kd46w" (UID: "e50a36d0-5f6f-49a0-92df-08fe6f997a4d") : failed to sync configmap cache: timed out waiting for the condition Dec 05 14:13:35 crc kubenswrapper[4858]: I1205 14:13:35.894847 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 05 14:13:36 crc kubenswrapper[4858]: I1205 14:13:36.363778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:36 crc kubenswrapper[4858]: I1205 14:13:36.364758 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d45fc4855-kd46w\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:36 crc kubenswrapper[4858]: I1205 14:13:36.389586 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.096308 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.096402 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.096572 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghgrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.098088 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.197803 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.197876 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.198032 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > 
/var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzv22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(96d65651-be4c-475d-b4dc-293f42b30e39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.199371 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.231479 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.231526 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.231632 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54zhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(709c2e19-3180-41ef-9341-df5e95e1733a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.232838 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="709c2e19-3180-41ef-9341-df5e95e1733a" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.295059 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-rabbitmq:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.298484 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.298729 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-mariadb:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="709c2e19-3180-41ef-9341-df5e95e1733a" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.473334 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-ovn-controller:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.473398 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-ovn-controller:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.473573 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-ovn-controller:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h595h667h5ddh95hcbh8bh588h5f9h579h6h5f5h5f7h668h557h68fh67ch576h67fhdfh56ch694h55fh5b5h549h67bh5f6h85h598h67bh667h564q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgktl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/b
in/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-gtl95_openstack(07c39bc3-5d28-49a6-88b6-348d08f7b61a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.474900 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-gtl95" podUID="07c39bc3-5d28-49a6-88b6-348d08f7b61a" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.822747 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.823026 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:13:37 crc kubenswrapper[4858]: E1205 14:13:37.823210 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd4h544hc6h5d5h67bhcdh5c7h5fchb4h5f6h5ddh57h94h58h667h5f7h68h679h587h57chc6h5d5h7ch5b7hffh58h644h4hcdh67h5f8h74q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9t8cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(c4c61018-b6f5-488a-948c-7eacd25c0b8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:13:38 crc kubenswrapper[4858]: E1205 14:13:38.307361 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-ovn-controller:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/ovn-controller-gtl95" podUID="07c39bc3-5d28-49a6-88b6-348d08f7b61a" Dec 05 14:13:38 crc kubenswrapper[4858]: I1205 14:13:38.512207 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d45fc4855-kd46w"] Dec 05 14:13:38 crc kubenswrapper[4858]: I1205 14:13:38.517999 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wrph5"] Dec 05 14:13:38 crc kubenswrapper[4858]: I1205 14:13:38.658723 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74b9cbccdc-86495"] Dec 05 14:13:38 crc kubenswrapper[4858]: E1205 14:13:38.834584 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 05 14:13:38 crc kubenswrapper[4858]: E1205 14:13:38.834659 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 05 14:13:38 crc kubenswrapper[4858]: E1205 14:13:38.834806 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l45ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(805d1f07-ba33-4534-8fe0-3697049c2eb6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 14:13:38 crc kubenswrapper[4858]: E1205 14:13:38.836109 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="805d1f07-ba33-4534-8fe0-3697049c2eb6" Dec 05 14:13:38 crc kubenswrapper[4858]: W1205 14:13:38.838766 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode50a36d0_5f6f_49a0_92df_08fe6f997a4d.slice/crio-edb66ea29db6d018d57d1deb3710e3ef32ee075fae5b078e59cecb58f8d43ed0 WatchSource:0}: Error finding container edb66ea29db6d018d57d1deb3710e3ef32ee075fae5b078e59cecb58f8d43ed0: Status 404 returned error can't find the container with id edb66ea29db6d018d57d1deb3710e3ef32ee075fae5b078e59cecb58f8d43ed0 Dec 05 14:13:38 crc kubenswrapper[4858]: W1205 14:13:38.840757 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb68e5db_617e_469b_ada4_41ae2d186f8b.slice/crio-615a46f1858cac88969ae187a24bf165c0e5f110ba41c6ad816c66ba823e4903 WatchSource:0}: Error finding container 615a46f1858cac88969ae187a24bf165c0e5f110ba41c6ad816c66ba823e4903: Status 404 returned error can't find the container with id 615a46f1858cac88969ae187a24bf165c0e5f110ba41c6ad816c66ba823e4903 Dec 05 14:13:38 crc kubenswrapper[4858]: W1205 14:13:38.848429 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod994a3e0f_1bc4_4b50_9f4f_dfc07fe5ce8f.slice/crio-f5a878c1ae5db57eb05b0e9121f518c112fed3fb9c64dca427e1b638ca0d633e WatchSource:0}: Error finding container f5a878c1ae5db57eb05b0e9121f518c112fed3fb9c64dca427e1b638ca0d633e: Status 404 returned error can't find the container with id f5a878c1ae5db57eb05b0e9121f518c112fed3fb9c64dca427e1b638ca0d633e Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.317191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wrph5" event={"ID":"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f","Type":"ContainerStarted","Data":"f5a878c1ae5db57eb05b0e9121f518c112fed3fb9c64dca427e1b638ca0d633e"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.319283 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="fb68e5db-617e-469b-ada4-41ae2d186f8b" containerID="8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724" exitCode=0 Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.319386 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" event={"ID":"fb68e5db-617e-469b-ada4-41ae2d186f8b","Type":"ContainerDied","Data":"8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.319408 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" event={"ID":"fb68e5db-617e-469b-ada4-41ae2d186f8b","Type":"ContainerStarted","Data":"615a46f1858cac88969ae187a24bf165c0e5f110ba41c6ad816c66ba823e4903"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.323344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" event={"ID":"6971d622-3415-4baa-88e7-e68b8e2323ae","Type":"ContainerStarted","Data":"c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.323508 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" podUID="6971d622-3415-4baa-88e7-e68b8e2323ae" containerName="dnsmasq-dns" containerID="cri-o://c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127" gracePeriod=10 Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.323615 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.326099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"18eb80fb-2c3b-4c85-b52b-e3a0821ba693","Type":"ContainerStarted","Data":"b47ff9247ef547e0560600f5764687aecea4ec8310304a4157cffceffa8826b7"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.328055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" event={"ID":"d303d608-2c19-47fa-9623-f84b66025548","Type":"ContainerStarted","Data":"a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.328185 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" podUID="d303d608-2c19-47fa-9623-f84b66025548" containerName="dnsmasq-dns" containerID="cri-o://a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201" gracePeriod=10 Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.328255 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.332610 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"89cb42a5-30b0-41d4-ba81-e316df2af14b","Type":"ContainerStarted","Data":"86d926f5510e4d3d1155e5c6e24008e0f68c30642861a35f1796dafc7ccf353c"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.333235 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.342505 4858 generic.go:334] "Generic (PLEG): container finished" podID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" containerID="f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f" exitCode=0 Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.342569 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" event={"ID":"e50a36d0-5f6f-49a0-92df-08fe6f997a4d","Type":"ContainerDied","Data":"f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.342594 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" event={"ID":"e50a36d0-5f6f-49a0-92df-08fe6f997a4d","Type":"ContainerStarted","Data":"edb66ea29db6d018d57d1deb3710e3ef32ee075fae5b078e59cecb58f8d43ed0"} Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.353758 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk9tz" event={"ID":"f902132d-be72-462e-acae-0765edc6a2fd","Type":"ContainerStarted","Data":"3bc3bbe938af70bb303ab0c2ed28f75f6396e8c94f6d93da1bed512f9a034390"} Dec 05 14:13:39 crc kubenswrapper[4858]: E1205 14:13:39.362365 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="805d1f07-ba33-4534-8fe0-3697049c2eb6" Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.388522 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" podStartSLOduration=40.302076599 podStartE2EDuration="40.38848493s" podCreationTimestamp="2025-12-05 14:12:59 +0000 UTC" firstStartedPulling="2025-12-05 14:13:20.786061414 +0000 UTC m=+1009.333659553" lastFinishedPulling="2025-12-05 14:13:20.872469745 +0000 UTC m=+1009.420067884" observedRunningTime="2025-12-05 14:13:39.363282273 +0000 UTC m=+1027.910880412" watchObservedRunningTime="2025-12-05 14:13:39.38848493 +0000 UTC m=+1027.936083069" Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.400277 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" podStartSLOduration=39.400251837 podStartE2EDuration="39.400251837s" podCreationTimestamp="2025-12-05 14:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:13:39.39704271 +0000 UTC m=+1027.944640839" watchObservedRunningTime="2025-12-05 14:13:39.400251837 +0000 UTC m=+1027.947849966" Dec 05 14:13:39 crc kubenswrapper[4858]: I1205 14:13:39.427146 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.972529209 podStartE2EDuration="35.427127999s" podCreationTimestamp="2025-12-05 14:13:04 +0000 UTC" firstStartedPulling="2025-12-05 14:13:20.093563501 +0000 UTC m=+1008.641161640" lastFinishedPulling="2025-12-05 14:13:37.548162281 +0000 UTC m=+1026.095760430" observedRunningTime="2025-12-05 14:13:39.418683912 +0000 UTC m=+1027.966282041" watchObservedRunningTime="2025-12-05 14:13:39.427127999 +0000 UTC m=+1027.974726138" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.012615 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.016008 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.059176 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-config\") pod \"d303d608-2c19-47fa-9623-f84b66025548\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.059223 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-dns-svc\") pod \"d303d608-2c19-47fa-9623-f84b66025548\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.059277 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-dns-svc\") pod \"6971d622-3415-4baa-88e7-e68b8e2323ae\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.059295 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nw6d\" (UniqueName: \"kubernetes.io/projected/d303d608-2c19-47fa-9623-f84b66025548-kube-api-access-8nw6d\") pod \"d303d608-2c19-47fa-9623-f84b66025548\" (UID: \"d303d608-2c19-47fa-9623-f84b66025548\") " Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.059324 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-config\") pod \"6971d622-3415-4baa-88e7-e68b8e2323ae\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.059344 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz7lx\" (UniqueName: \"kubernetes.io/projected/6971d622-3415-4baa-88e7-e68b8e2323ae-kube-api-access-zz7lx\") pod \"6971d622-3415-4baa-88e7-e68b8e2323ae\" (UID: \"6971d622-3415-4baa-88e7-e68b8e2323ae\") " Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.073473 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d303d608-2c19-47fa-9623-f84b66025548-kube-api-access-8nw6d" (OuterVolumeSpecName: "kube-api-access-8nw6d") pod "d303d608-2c19-47fa-9623-f84b66025548" (UID: "d303d608-2c19-47fa-9623-f84b66025548"). InnerVolumeSpecName "kube-api-access-8nw6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.085613 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6971d622-3415-4baa-88e7-e68b8e2323ae-kube-api-access-zz7lx" (OuterVolumeSpecName: "kube-api-access-zz7lx") pod "6971d622-3415-4baa-88e7-e68b8e2323ae" (UID: "6971d622-3415-4baa-88e7-e68b8e2323ae"). InnerVolumeSpecName "kube-api-access-zz7lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.119020 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6971d622-3415-4baa-88e7-e68b8e2323ae" (UID: "6971d622-3415-4baa-88e7-e68b8e2323ae"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.136450 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-config" (OuterVolumeSpecName: "config") pod "d303d608-2c19-47fa-9623-f84b66025548" (UID: "d303d608-2c19-47fa-9623-f84b66025548"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.157486 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d303d608-2c19-47fa-9623-f84b66025548" (UID: "d303d608-2c19-47fa-9623-f84b66025548"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.161544 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.161573 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d303d608-2c19-47fa-9623-f84b66025548-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.161582 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.161591 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nw6d\" (UniqueName: \"kubernetes.io/projected/d303d608-2c19-47fa-9623-f84b66025548-kube-api-access-8nw6d\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.161603 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz7lx\" (UniqueName: \"kubernetes.io/projected/6971d622-3415-4baa-88e7-e68b8e2323ae-kube-api-access-zz7lx\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.167486 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-config" (OuterVolumeSpecName: "config") pod "6971d622-3415-4baa-88e7-e68b8e2323ae" (UID: "6971d622-3415-4baa-88e7-e68b8e2323ae"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.264559 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6971d622-3415-4baa-88e7-e68b8e2323ae-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.370554 4858 generic.go:334] "Generic (PLEG): container finished" podID="6971d622-3415-4baa-88e7-e68b8e2323ae" containerID="c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127" exitCode=0 Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.370664 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" event={"ID":"6971d622-3415-4baa-88e7-e68b8e2323ae","Type":"ContainerDied","Data":"c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127"} Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.370702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" event={"ID":"6971d622-3415-4baa-88e7-e68b8e2323ae","Type":"ContainerDied","Data":"9c56f6ffffcb31d5af3b8480807aedd96882c67ba981ca0cc2bd54328b5e1779"} Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.370712 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d7d7c8dff-98hcf" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.370734 4858 scope.go:117] "RemoveContainer" containerID="c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.376233 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" event={"ID":"e50a36d0-5f6f-49a0-92df-08fe6f997a4d","Type":"ContainerStarted","Data":"0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb"} Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.377273 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.385782 4858 generic.go:334] "Generic (PLEG): container finished" podID="f902132d-be72-462e-acae-0765edc6a2fd" containerID="3bc3bbe938af70bb303ab0c2ed28f75f6396e8c94f6d93da1bed512f9a034390" exitCode=0 Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.385942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk9tz" event={"ID":"f902132d-be72-462e-acae-0765edc6a2fd","Type":"ContainerDied","Data":"3bc3bbe938af70bb303ab0c2ed28f75f6396e8c94f6d93da1bed512f9a034390"} Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.399702 4858 generic.go:334] "Generic (PLEG): container finished" podID="d303d608-2c19-47fa-9623-f84b66025548" containerID="a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201" exitCode=0 Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.399905 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.400685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" event={"ID":"d303d608-2c19-47fa-9623-f84b66025548","Type":"ContainerDied","Data":"a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201"} Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.400771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-784d8f4b89-d828q" event={"ID":"d303d608-2c19-47fa-9623-f84b66025548","Type":"ContainerDied","Data":"da32ff63d2c0e3cf24179facbee75c7cccc460a7b2397c98736929bc996dbe98"} Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.406660 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" event={"ID":"fb68e5db-617e-469b-ada4-41ae2d186f8b","Type":"ContainerStarted","Data":"e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062"} Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.406714 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.408548 4858 scope.go:117] "RemoveContainer" containerID="15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.411368 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" podStartSLOduration=6.411333259 podStartE2EDuration="6.411333259s" podCreationTimestamp="2025-12-05 14:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:13:40.40280224 +0000 UTC m=+1028.950400399" watchObservedRunningTime="2025-12-05 14:13:40.411333259 +0000 UTC m=+1028.958931398" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.436899 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d7d7c8dff-98hcf"] Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.453444 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d7d7c8dff-98hcf"] Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.471816 4858 scope.go:117] "RemoveContainer" containerID="c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127" Dec 05 14:13:40 crc kubenswrapper[4858]: E1205 14:13:40.472320 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127\": container with ID starting with c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127 not found: ID does not exist" containerID="c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.472372 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127"} err="failed to get container status \"c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127\": rpc error: code = NotFound desc = could not find container \"c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127\": container with ID starting with c1680c1a0a3232118c8eca6d0c36cec9a8e6fe7d2031f163b358492e0dbd8127 not found: ID does not exist" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 
14:13:40.472394 4858 scope.go:117] "RemoveContainer" containerID="15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b" Dec 05 14:13:40 crc kubenswrapper[4858]: E1205 14:13:40.472632 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b\": container with ID starting with 15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b not found: ID does not exist" containerID="15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.472649 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b"} err="failed to get container status \"15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b\": rpc error: code = NotFound desc = could not find container \"15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b\": container with ID starting with 15cbaa2120346fd8423b2f4baa5ef5cef58dd38665f844ef16205f7918df8f0b not found: ID does not exist" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.472665 4858 scope.go:117] "RemoveContainer" containerID="a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.490637 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" podStartSLOduration=6.490610538 podStartE2EDuration="6.490610538s" podCreationTimestamp="2025-12-05 14:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:13:40.46686765 +0000 UTC m=+1029.014465789" watchObservedRunningTime="2025-12-05 14:13:40.490610538 +0000 UTC m=+1029.038208677" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.493280 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-784d8f4b89-d828q"] Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.500123 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-784d8f4b89-d828q"] Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.510877 4858 scope.go:117] "RemoveContainer" containerID="f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.534098 4858 scope.go:117] "RemoveContainer" containerID="a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201" Dec 05 14:13:40 crc kubenswrapper[4858]: E1205 14:13:40.535035 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201\": container with ID starting with a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201 not found: ID does not exist" containerID="a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.535099 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201"} err="failed to get container status \"a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201\": rpc error: code = NotFound desc = could not find container \"a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201\": 
container with ID starting with a15acf3e0a6ebde74e270da55400833507b203c2efe1aae3a42577a5fc55a201 not found: ID does not exist" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.535146 4858 scope.go:117] "RemoveContainer" containerID="f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92" Dec 05 14:13:40 crc kubenswrapper[4858]: E1205 14:13:40.535647 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92\": container with ID starting with f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92 not found: ID does not exist" containerID="f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92" Dec 05 14:13:40 crc kubenswrapper[4858]: I1205 14:13:40.535683 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92"} err="failed to get container status \"f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92\": rpc error: code = NotFound desc = could not find container \"f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92\": container with ID starting with f5240b47f0816656996c2cdc4381f31a61fa9f99fd339a0adb8e6898e8511c92 not found: ID does not exist" Dec 05 14:13:41 crc kubenswrapper[4858]: I1205 14:13:41.415711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk9tz" event={"ID":"f902132d-be72-462e-acae-0765edc6a2fd","Type":"ContainerStarted","Data":"0d0068f43140031d69c78a0c2b08688dd8a6a37734c5aacd19e3c48aacbb7f76"} Dec 05 14:13:41 crc kubenswrapper[4858]: I1205 14:13:41.416054 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kk9tz" event={"ID":"f902132d-be72-462e-acae-0765edc6a2fd","Type":"ContainerStarted","Data":"f0bbc8229323e01c6fc9286ca7ded07aab8c5a6763f7f3512642fb5f56b903b5"} Dec 05 14:13:41 crc kubenswrapper[4858]: I1205 14:13:41.416197 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:41 crc kubenswrapper[4858]: I1205 14:13:41.435415 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kk9tz" podStartSLOduration=15.718706136 podStartE2EDuration="31.435399319s" podCreationTimestamp="2025-12-05 14:13:10 +0000 UTC" firstStartedPulling="2025-12-05 14:13:21.838519518 +0000 UTC m=+1010.386117657" lastFinishedPulling="2025-12-05 14:13:37.555212701 +0000 UTC m=+1026.102810840" observedRunningTime="2025-12-05 14:13:41.434465035 +0000 UTC m=+1029.982063184" watchObservedRunningTime="2025-12-05 14:13:41.435399319 +0000 UTC m=+1029.982997458" Dec 05 14:13:41 crc kubenswrapper[4858]: I1205 14:13:41.911965 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6971d622-3415-4baa-88e7-e68b8e2323ae" path="/var/lib/kubelet/pods/6971d622-3415-4baa-88e7-e68b8e2323ae/volumes" Dec 05 14:13:41 crc kubenswrapper[4858]: I1205 14:13:41.912550 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d303d608-2c19-47fa-9623-f84b66025548" path="/var/lib/kubelet/pods/d303d608-2c19-47fa-9623-f84b66025548/volumes" Dec 05 14:13:42 crc kubenswrapper[4858]: I1205 14:13:42.428298 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:13:44 crc kubenswrapper[4858]: I1205 14:13:44.659995 4858 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:44 crc kubenswrapper[4858]: I1205 14:13:44.759938 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:13:44 crc kubenswrapper[4858]: I1205 14:13:44.760457 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:13:45 crc kubenswrapper[4858]: I1205 14:13:45.201014 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.392005 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.495318 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74b9cbccdc-86495"] Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.495769 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" podUID="fb68e5db-617e-469b-ada4-41ae2d186f8b" containerName="dnsmasq-dns" containerID="cri-o://e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062" gracePeriod=10 Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.830201 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78d44df849-7lnbz"] Dec 05 14:13:46 crc kubenswrapper[4858]: E1205 14:13:46.830757 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6971d622-3415-4baa-88e7-e68b8e2323ae" containerName="init" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.830776 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6971d622-3415-4baa-88e7-e68b8e2323ae" containerName="init" Dec 05 14:13:46 crc kubenswrapper[4858]: E1205 14:13:46.830800 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d303d608-2c19-47fa-9623-f84b66025548" containerName="dnsmasq-dns" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.830809 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d303d608-2c19-47fa-9623-f84b66025548" containerName="dnsmasq-dns" Dec 05 14:13:46 crc kubenswrapper[4858]: E1205 14:13:46.830837 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d303d608-2c19-47fa-9623-f84b66025548" containerName="init" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.830844 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d303d608-2c19-47fa-9623-f84b66025548" containerName="init" Dec 05 14:13:46 crc kubenswrapper[4858]: E1205 14:13:46.830862 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6971d622-3415-4baa-88e7-e68b8e2323ae" containerName="dnsmasq-dns" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.830872 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6971d622-3415-4baa-88e7-e68b8e2323ae" containerName="dnsmasq-dns" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.831022 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d303d608-2c19-47fa-9623-f84b66025548" containerName="dnsmasq-dns" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.831042 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6971d622-3415-4baa-88e7-e68b8e2323ae" containerName="dnsmasq-dns" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.832000 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.866703 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d44df849-7lnbz"] Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.896320 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-nb\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.896373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gr9\" (UniqueName: \"kubernetes.io/projected/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-kube-api-access-z9gr9\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.896418 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-config\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.896464 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-sb\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:46 crc kubenswrapper[4858]: I1205 14:13:46.896493 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-dns-svc\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:46 crc kubenswrapper[4858]: E1205 14:13:46.967504 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="c4c61018-b6f5-488a-948c-7eacd25c0b8e" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.998029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-sb\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.998089 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-dns-svc\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.998198 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-nb\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.998227 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9gr9\" (UniqueName: \"kubernetes.io/projected/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-kube-api-access-z9gr9\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.998280 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-config\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.999068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-config\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.999110 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-sb\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.999128 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-dns-svc\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:46.999515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-nb\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.028889 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9gr9\" (UniqueName: \"kubernetes.io/projected/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-kube-api-access-z9gr9\") pod \"dnsmasq-dns-78d44df849-7lnbz\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.164317 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.212327 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.303414 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-config\") pod \"fb68e5db-617e-469b-ada4-41ae2d186f8b\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.303497 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlhtd\" (UniqueName: \"kubernetes.io/projected/fb68e5db-617e-469b-ada4-41ae2d186f8b-kube-api-access-zlhtd\") pod \"fb68e5db-617e-469b-ada4-41ae2d186f8b\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.303522 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-dns-svc\") pod \"fb68e5db-617e-469b-ada4-41ae2d186f8b\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.303559 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-ovsdbserver-nb\") pod \"fb68e5db-617e-469b-ada4-41ae2d186f8b\" (UID: \"fb68e5db-617e-469b-ada4-41ae2d186f8b\") " Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.308726 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb68e5db-617e-469b-ada4-41ae2d186f8b-kube-api-access-zlhtd" (OuterVolumeSpecName: "kube-api-access-zlhtd") pod "fb68e5db-617e-469b-ada4-41ae2d186f8b" (UID: "fb68e5db-617e-469b-ada4-41ae2d186f8b"). InnerVolumeSpecName "kube-api-access-zlhtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.407232 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlhtd\" (UniqueName: \"kubernetes.io/projected/fb68e5db-617e-469b-ada4-41ae2d186f8b-kube-api-access-zlhtd\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.412809 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb68e5db-617e-469b-ada4-41ae2d186f8b" (UID: "fb68e5db-617e-469b-ada4-41ae2d186f8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.418418 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-config" (OuterVolumeSpecName: "config") pod "fb68e5db-617e-469b-ada4-41ae2d186f8b" (UID: "fb68e5db-617e-469b-ada4-41ae2d186f8b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.432484 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb68e5db-617e-469b-ada4-41ae2d186f8b" (UID: "fb68e5db-617e-469b-ada4-41ae2d186f8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.486051 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wrph5" event={"ID":"994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f","Type":"ContainerStarted","Data":"d7e225d3d22359f1165938c1a9dfba97876836e3e08585ad7824cabf437f057e"} Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.501749 4858 generic.go:334] "Generic (PLEG): container finished" podID="fb68e5db-617e-469b-ada4-41ae2d186f8b" containerID="e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062" exitCode=0 Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.501816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" event={"ID":"fb68e5db-617e-469b-ada4-41ae2d186f8b","Type":"ContainerDied","Data":"e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062"} Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.501854 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" event={"ID":"fb68e5db-617e-469b-ada4-41ae2d186f8b","Type":"ContainerDied","Data":"615a46f1858cac88969ae187a24bf165c0e5f110ba41c6ad816c66ba823e4903"} Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.501870 4858 scope.go:117] "RemoveContainer" containerID="e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.501969 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74b9cbccdc-86495" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.509128 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.509162 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.509171 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb68e5db-617e-469b-ada4-41ae2d186f8b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.511373 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wrph5" podStartSLOduration=5.952268806 podStartE2EDuration="13.511350417s" podCreationTimestamp="2025-12-05 14:13:34 +0000 UTC" firstStartedPulling="2025-12-05 14:13:38.85989107 +0000 UTC m=+1027.407489209" lastFinishedPulling="2025-12-05 14:13:46.418972681 +0000 UTC m=+1034.966570820" observedRunningTime="2025-12-05 14:13:47.50551106 +0000 UTC m=+1036.053109199" watchObservedRunningTime="2025-12-05 14:13:47.511350417 +0000 UTC m=+1036.058948556" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.549887 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c4c61018-b6f5-488a-948c-7eacd25c0b8e","Type":"ContainerStarted","Data":"6fb2e1625b5f7780e18083f21efbc82baff8daa411cc1e10b7b16f32ffc81ce0"} Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.561794 4858 scope.go:117] "RemoveContainer" containerID="8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.582317 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"18eb80fb-2c3b-4c85-b52b-e3a0821ba693","Type":"ContainerStarted","Data":"6980ee5210f61b1a0e8a5c47e26254d3287afa459ccaf9c013b516cd3c1e8372"} Dec 05 14:13:47 crc kubenswrapper[4858]: E1205 14:13:47.582661 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-ovn-nb-db-server:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="c4c61018-b6f5-488a-948c-7eacd25c0b8e" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.643087 4858 scope.go:117] "RemoveContainer" containerID="e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062" Dec 05 14:13:47 crc kubenswrapper[4858]: E1205 14:13:47.643631 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062\": container with ID starting with e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062 not found: ID does not exist" containerID="e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.643670 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062"} 
err="failed to get container status \"e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062\": rpc error: code = NotFound desc = could not find container \"e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062\": container with ID starting with e7ec755152b5db14ee08df814a7047990e9241842a936c7e97ca596e1af8e062 not found: ID does not exist" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.643697 4858 scope.go:117] "RemoveContainer" containerID="8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724" Dec 05 14:13:47 crc kubenswrapper[4858]: E1205 14:13:47.643995 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724\": container with ID starting with 8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724 not found: ID does not exist" containerID="8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.644015 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724"} err="failed to get container status \"8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724\": rpc error: code = NotFound desc = could not find container \"8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724\": container with ID starting with 8a1e40decca6f8fb72323d54f91955996774d8816038d86de0529b1e1d58d724 not found: ID does not exist" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.673491 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.946272037 podStartE2EDuration="34.673467092s" podCreationTimestamp="2025-12-05 14:13:13 +0000 UTC" firstStartedPulling="2025-12-05 14:13:21.748456028 +0000 UTC m=+1010.296054167" lastFinishedPulling="2025-12-05 14:13:46.475651083 +0000 UTC m=+1035.023249222" observedRunningTime="2025-12-05 14:13:47.642813329 +0000 UTC m=+1036.190411468" watchObservedRunningTime="2025-12-05 14:13:47.673467092 +0000 UTC m=+1036.221065231" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.688004 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74b9cbccdc-86495"] Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.694888 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74b9cbccdc-86495"] Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.748280 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78d44df849-7lnbz"] Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.897503 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 05 14:13:47 crc kubenswrapper[4858]: E1205 14:13:47.898020 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb68e5db-617e-469b-ada4-41ae2d186f8b" containerName="init" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.898038 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb68e5db-617e-469b-ada4-41ae2d186f8b" containerName="init" Dec 05 14:13:47 crc kubenswrapper[4858]: E1205 14:13:47.898068 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb68e5db-617e-469b-ada4-41ae2d186f8b" containerName="dnsmasq-dns" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.898075 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fb68e5db-617e-469b-ada4-41ae2d186f8b" containerName="dnsmasq-dns" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.898238 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb68e5db-617e-469b-ada4-41ae2d186f8b" containerName="dnsmasq-dns" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.904279 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.910390 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.910555 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.910726 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.910870 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7p77r" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.913451 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb68e5db-617e-469b-ada4-41ae2d186f8b" path="/var/lib/kubelet/pods/fb68e5db-617e-469b-ada4-41ae2d186f8b/volumes" Dec 05 14:13:47 crc kubenswrapper[4858]: I1205 14:13:47.914340 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.051442 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dpd\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-kube-api-access-74dpd\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.051719 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.051912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1732ed20-5466-4af7-995e-631a4111d81b-cache\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.052075 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1732ed20-5466-4af7-995e-631a4111d81b-lock\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.052228 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.154016 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/1732ed20-5466-4af7-995e-631a4111d81b-lock\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.154111 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.154200 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dpd\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-kube-api-access-74dpd\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.154234 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.154295 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1732ed20-5466-4af7-995e-631a4111d81b-cache\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.154475 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1732ed20-5466-4af7-995e-631a4111d81b-lock\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: E1205 14:13:48.154610 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 14:13:48 crc kubenswrapper[4858]: E1205 14:13:48.154628 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 14:13:48 crc kubenswrapper[4858]: E1205 14:13:48.154671 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift podName:1732ed20-5466-4af7-995e-631a4111d81b nodeName:}" failed. No retries permitted until 2025-12-05 14:13:48.654654518 +0000 UTC m=+1037.202252657 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift") pod "swift-storage-0" (UID: "1732ed20-5466-4af7-995e-631a4111d81b") : configmap "swift-ring-files" not found Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.154727 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1732ed20-5466-4af7-995e-631a4111d81b-cache\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.154765 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.171529 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dpd\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-kube-api-access-74dpd\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.183044 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.438572 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-bd95n"] Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.439523 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.443027 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.443776 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.445485 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.471330 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bd95n"] Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.560174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-combined-ca-bundle\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.560240 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-ring-data-devices\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.560583 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/10ff7965-e479-43a2-bf1f-403566f07367-etc-swift\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.560626 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-swiftconf\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.560702 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk4kh\" (UniqueName: \"kubernetes.io/projected/10ff7965-e479-43a2-bf1f-403566f07367-kube-api-access-dk4kh\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.560758 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-scripts\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.560793 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-dispersionconf\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 
14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.594542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"709c2e19-3180-41ef-9341-df5e95e1733a","Type":"ContainerStarted","Data":"7286e5cc456014b26fd0b0bd794945ba30e65220e8b3a13d69ba38dbcad85278"} Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.595887 4858 generic.go:334] "Generic (PLEG): container finished" podID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerID="a8afbf9221979e13deb7a2c81c55edd5a1d5550f4fa8bd7832e731a644550976" exitCode=0 Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.595996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" event={"ID":"25eaa80d-4f7a-46ac-8f1a-4f497013d82f","Type":"ContainerDied","Data":"a8afbf9221979e13deb7a2c81c55edd5a1d5550f4fa8bd7832e731a644550976"} Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.596022 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" event={"ID":"25eaa80d-4f7a-46ac-8f1a-4f497013d82f","Type":"ContainerStarted","Data":"ec38a8224d680b3e5a709a103466f54469f60ac5687e5b26c2bd44d96a27e5f3"} Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.662110 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/10ff7965-e479-43a2-bf1f-403566f07367-etc-swift\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.662151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-swiftconf\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.662216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk4kh\" (UniqueName: \"kubernetes.io/projected/10ff7965-e479-43a2-bf1f-403566f07367-kube-api-access-dk4kh\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.662235 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-scripts\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.662274 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-dispersionconf\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.662357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-combined-ca-bundle\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.662420 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-ring-data-devices\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.662480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.663599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-scripts\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: E1205 14:13:48.665279 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 14:13:48 crc kubenswrapper[4858]: E1205 14:13:48.665297 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 14:13:48 crc kubenswrapper[4858]: E1205 14:13:48.665340 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift podName:1732ed20-5466-4af7-995e-631a4111d81b nodeName:}" failed. No retries permitted until 2025-12-05 14:13:49.665322487 +0000 UTC m=+1038.212920826 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift") pod "swift-storage-0" (UID: "1732ed20-5466-4af7-995e-631a4111d81b") : configmap "swift-ring-files" not found Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.665586 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-ring-data-devices\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.666959 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/10ff7965-e479-43a2-bf1f-403566f07367-etc-swift\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.674761 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-combined-ca-bundle\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.682926 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-dispersionconf\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.683875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-swiftconf\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.695712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk4kh\" (UniqueName: \"kubernetes.io/projected/10ff7965-e479-43a2-bf1f-403566f07367-kube-api-access-dk4kh\") pod \"swift-ring-rebalance-bd95n\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:48 crc kubenswrapper[4858]: I1205 14:13:48.790031 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.295432 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-bd95n"] Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.491974 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.613014 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c4c61018-b6f5-488a-948c-7eacd25c0b8e","Type":"ContainerStarted","Data":"a3ea970488e93a00412c47ec464cc108d6ca436528c48b6786668a659d2b1d38"} Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.614415 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bd95n" event={"ID":"10ff7965-e479-43a2-bf1f-403566f07367","Type":"ContainerStarted","Data":"8b8f3e57552f7b9c6a8b69d9e8ab8d1bf491d21afc67b6438cb06ab210c234be"} Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.618787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" event={"ID":"25eaa80d-4f7a-46ac-8f1a-4f497013d82f","Type":"ContainerStarted","Data":"33e79e2565bc959ee0475babaa2920a19d72a32d53368ecaab4ae32b7261aec5"} Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.618943 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.640042 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.335201634 podStartE2EDuration="39.640026172s" podCreationTimestamp="2025-12-05 14:13:10 +0000 UTC" firstStartedPulling="2025-12-05 14:13:24.39922312 +0000 UTC m=+1012.946821259" lastFinishedPulling="2025-12-05 14:13:48.704047658 +0000 UTC m=+1037.251645797" observedRunningTime="2025-12-05 14:13:49.638793749 +0000 UTC m=+1038.186391888" watchObservedRunningTime="2025-12-05 14:13:49.640026172 +0000 UTC m=+1038.187624311" Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.667187 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" podStartSLOduration=3.667169821 podStartE2EDuration="3.667169821s" podCreationTimestamp="2025-12-05 14:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:13:49.658115948 +0000 UTC m=+1038.205714087" watchObservedRunningTime="2025-12-05 14:13:49.667169821 +0000 UTC m=+1038.214767960" Dec 05 14:13:49 crc kubenswrapper[4858]: I1205 14:13:49.699399 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:49 crc kubenswrapper[4858]: E1205 14:13:49.699928 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 14:13:49 crc kubenswrapper[4858]: E1205 14:13:49.699955 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 14:13:49 crc kubenswrapper[4858]: E1205 14:13:49.700002 4858 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift podName:1732ed20-5466-4af7-995e-631a4111d81b nodeName:}" failed. No retries permitted until 2025-12-05 14:13:51.699984193 +0000 UTC m=+1040.247582562 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift") pod "swift-storage-0" (UID: "1732ed20-5466-4af7-995e-631a4111d81b") : configmap "swift-ring-files" not found Dec 05 14:13:50 crc kubenswrapper[4858]: I1205 14:13:50.479495 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:50 crc kubenswrapper[4858]: I1205 14:13:50.543761 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:50 crc kubenswrapper[4858]: I1205 14:13:50.665492 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 05 14:13:50 crc kubenswrapper[4858]: I1205 14:13:50.751508 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:51 crc kubenswrapper[4858]: I1205 14:13:51.739805 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:51 crc kubenswrapper[4858]: E1205 14:13:51.741239 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 14:13:51 crc kubenswrapper[4858]: E1205 14:13:51.741256 4858 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 14:13:51 crc kubenswrapper[4858]: E1205 14:13:51.741300 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift podName:1732ed20-5466-4af7-995e-631a4111d81b nodeName:}" failed. No retries permitted until 2025-12-05 14:13:55.741283595 +0000 UTC m=+1044.288881734 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift") pod "swift-storage-0" (UID: "1732ed20-5466-4af7-995e-631a4111d81b") : configmap "swift-ring-files" not found Dec 05 14:13:51 crc kubenswrapper[4858]: I1205 14:13:51.750926 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:52 crc kubenswrapper[4858]: I1205 14:13:52.642555 4858 generic.go:334] "Generic (PLEG): container finished" podID="709c2e19-3180-41ef-9341-df5e95e1733a" containerID="7286e5cc456014b26fd0b0bd794945ba30e65220e8b3a13d69ba38dbcad85278" exitCode=0 Dec 05 14:13:52 crc kubenswrapper[4858]: I1205 14:13:52.643388 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"709c2e19-3180-41ef-9341-df5e95e1733a","Type":"ContainerDied","Data":"7286e5cc456014b26fd0b0bd794945ba30e65220e8b3a13d69ba38dbcad85278"} Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.651897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"805d1f07-ba33-4534-8fe0-3697049c2eb6","Type":"ContainerStarted","Data":"e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a"} Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.653035 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.653844 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtl95" event={"ID":"07c39bc3-5d28-49a6-88b6-348d08f7b61a","Type":"ContainerStarted","Data":"756a922b05739a9eb18e90d9d400898147a9b6c85a3d3297c04f32590d8ed521"} Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.654021 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gtl95" Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.656124 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bd95n" event={"ID":"10ff7965-e479-43a2-bf1f-403566f07367","Type":"ContainerStarted","Data":"d51fb959c29e951c76a9b76b359d0a6ca6ca50f12f351de08e83c7daa5e268a0"} Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.657965 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e","Type":"ContainerStarted","Data":"0c94e38e0c6407cebbbc53c43ea9304aafd6e1a4ece231e2375378c43f8901fe"} Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.661299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"709c2e19-3180-41ef-9341-df5e95e1733a","Type":"ContainerStarted","Data":"d7bfe3e134a932d2fe75252dd90718e9a7304757b9be6bb625921393b7d776c1"} Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.714845 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.101921636 podStartE2EDuration="50.714812959s" podCreationTimestamp="2025-12-05 14:13:03 +0000 UTC" firstStartedPulling="2025-12-05 14:13:21.371134651 +0000 UTC m=+1009.918732790" lastFinishedPulling="2025-12-05 14:13:47.984025964 +0000 UTC m=+1036.531624113" observedRunningTime="2025-12-05 14:13:53.71336812 +0000 UTC m=+1042.260966259" watchObservedRunningTime="2025-12-05 14:13:53.714812959 +0000 UTC m=+1042.262411098" Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.715760 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.690332836 podStartE2EDuration="47.715753915s" podCreationTimestamp="2025-12-05 14:13:06 +0000 UTC" firstStartedPulling="2025-12-05 14:13:21.242549647 +0000 UTC m=+1009.790147786" lastFinishedPulling="2025-12-05 14:13:53.267970726 +0000 UTC m=+1041.815568865" observedRunningTime="2025-12-05 14:13:53.683135634 +0000 UTC m=+1042.230734043" watchObservedRunningTime="2025-12-05 14:13:53.715753915 +0000 UTC m=+1042.263352044" Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.738289 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-bd95n" podStartSLOduration=2.223898261 podStartE2EDuration="5.738272603s" podCreationTimestamp="2025-12-05 14:13:48 +0000 UTC" firstStartedPulling="2025-12-05 14:13:49.298507597 +0000 UTC m=+1037.846105736" lastFinishedPulling="2025-12-05 14:13:52.812881939 +0000 UTC m=+1041.360480078" observedRunningTime="2025-12-05 14:13:53.732794905 +0000 UTC m=+1042.280393054" watchObservedRunningTime="2025-12-05 14:13:53.738272603 +0000 UTC m=+1042.285870742" Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.755606 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gtl95" podStartSLOduration=12.560732466 podStartE2EDuration="43.755591271s" podCreationTimestamp="2025-12-05 14:13:10 +0000 UTC" firstStartedPulling="2025-12-05 14:13:21.614640383 +0000 UTC m=+1010.162238522" lastFinishedPulling="2025-12-05 14:13:52.809499188 +0000 UTC m=+1041.357097327" observedRunningTime="2025-12-05 14:13:53.754927833 +0000 UTC m=+1042.302525972" watchObservedRunningTime="2025-12-05 14:13:53.755591271 +0000 UTC m=+1042.303189410" Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.929788 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:53 crc kubenswrapper[4858]: I1205 14:13:53.973641 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.251243 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.252711 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.256366 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.256478 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wsdqx" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.256571 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.256662 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.290886 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.300916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrfn\" (UniqueName: \"kubernetes.io/projected/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-kube-api-access-7xrfn\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.301144 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.301263 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.301547 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-scripts\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.301663 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.301785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.301888 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-config\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: 
I1205 14:13:54.403714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.403803 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-scripts\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.403864 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.403907 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.403931 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-config\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.404054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrfn\" (UniqueName: \"kubernetes.io/projected/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-kube-api-access-7xrfn\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.404075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.404676 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-scripts\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.405291 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.405843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-config\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.412626 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.423503 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.425119 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.472742 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrfn\" (UniqueName: \"kubernetes.io/projected/d1147ad4-1af3-4e6e-8b0d-a26db8d0af74-kube-api-access-7xrfn\") pod \"ovn-northd-0\" (UID: \"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74\") " pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.570287 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.691869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d99fd616-b195-4da7-b7ac-99bed8479e36","Type":"ContainerStarted","Data":"08ffecd9cc7a71d82d3e6577739e4a4afe4fee77374116ce3b8137d81627385f"} Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.701274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96d65651-be4c-475d-b4dc-293f42b30e39","Type":"ContainerStarted","Data":"61be820f5d8a6be7f6e3cb724ea744ed88d63cbcb4c7adb651339c6612a8ed84"} Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.807127 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.807545 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 05 14:13:54 crc kubenswrapper[4858]: I1205 14:13:54.895565 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 05 14:13:55 crc kubenswrapper[4858]: I1205 14:13:55.711070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74","Type":"ContainerStarted","Data":"07d50f017f97b69bfe33c549811e58e48945cb1470efadc567e644a86eec3095"} Dec 05 14:13:55 crc kubenswrapper[4858]: I1205 14:13:55.754552 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:13:55 crc kubenswrapper[4858]: E1205 14:13:55.754760 4858 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 14:13:55 crc kubenswrapper[4858]: E1205 14:13:55.754932 4858 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 14:13:55 crc kubenswrapper[4858]: E1205 14:13:55.754989 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift podName:1732ed20-5466-4af7-995e-631a4111d81b nodeName:}" failed. No retries permitted until 2025-12-05 14:14:03.754972924 +0000 UTC m=+1052.302571063 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift") pod "swift-storage-0" (UID: "1732ed20-5466-4af7-995e-631a4111d81b") : configmap "swift-ring-files" not found Dec 05 14:13:56 crc kubenswrapper[4858]: I1205 14:13:56.719658 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74","Type":"ContainerStarted","Data":"5aa1d064d08fde90debd63cc1d1c87f9133aeef35735a6d48e1cd93f4dd45e20"} Dec 05 14:13:56 crc kubenswrapper[4858]: I1205 14:13:56.719939 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d1147ad4-1af3-4e6e-8b0d-a26db8d0af74","Type":"ContainerStarted","Data":"3ac7c6fa72bf3384303d8643a64a449d89a75bbca4d506920de4868f881909f9"} Dec 05 14:13:56 crc kubenswrapper[4858]: I1205 14:13:56.720088 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 05 14:13:56 crc kubenswrapper[4858]: I1205 14:13:56.745532 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.88027964 podStartE2EDuration="2.745499888s" podCreationTimestamp="2025-12-05 14:13:54 +0000 UTC" firstStartedPulling="2025-12-05 14:13:54.912623024 +0000 UTC m=+1043.460221163" lastFinishedPulling="2025-12-05 14:13:55.777843272 +0000 UTC m=+1044.325441411" observedRunningTime="2025-12-05 14:13:56.742945569 +0000 UTC m=+1045.290543728" watchObservedRunningTime="2025-12-05 14:13:56.745499888 +0000 UTC m=+1045.293098037" Dec 05 14:13:56 crc kubenswrapper[4858]: I1205 14:13:56.917578 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 05 14:13:56 crc kubenswrapper[4858]: I1205 14:13:56.990331 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.166659 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.248405 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d45fc4855-kd46w"] Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.248907 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" podUID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" containerName="dnsmasq-dns" containerID="cri-o://0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb" gracePeriod=10 Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.724265 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.727392 4858 generic.go:334] "Generic (PLEG): container finished" podID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" containerID="0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb" exitCode=0 Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.727454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" event={"ID":"e50a36d0-5f6f-49a0-92df-08fe6f997a4d","Type":"ContainerDied","Data":"0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb"} Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.727495 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.727519 4858 scope.go:117] "RemoveContainer" containerID="0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.727503 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d45fc4855-kd46w" event={"ID":"e50a36d0-5f6f-49a0-92df-08fe6f997a4d","Type":"ContainerDied","Data":"edb66ea29db6d018d57d1deb3710e3ef32ee075fae5b078e59cecb58f8d43ed0"} Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.750549 4858 scope.go:117] "RemoveContainer" containerID="f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.771420 4858 scope.go:117] "RemoveContainer" containerID="0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb" Dec 05 14:13:57 crc kubenswrapper[4858]: E1205 14:13:57.772343 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb\": container with ID starting with 0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb not found: ID does not exist" containerID="0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.772455 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb"} err="failed to get container status \"0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb\": rpc error: code = NotFound desc = could not find container \"0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb\": container with ID starting with 0c4c69efe1392cc56d571b517cf4cf3e7f33a79246f2693539559af2278c3adb not found: ID does not exist" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.772538 4858 scope.go:117] "RemoveContainer" containerID="f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f" Dec 05 14:13:57 crc kubenswrapper[4858]: E1205 14:13:57.773001 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f\": container with ID starting with f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f not found: ID does not exist" containerID="f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.773097 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f"} err="failed to get container status \"f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f\": rpc error: code = NotFound desc = could not find container \"f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f\": container with ID starting with f8f2232b4c221809cc49a66a91c059934f2a8e7c79e9e134c6892a92ccef969f not found: ID does not exist" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.806764 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpfxk\" (UniqueName: \"kubernetes.io/projected/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-kube-api-access-mpfxk\") pod \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.806889 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-config\") pod \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.806914 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-dns-svc\") pod \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.806974 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb\") pod \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.807060 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-nb\") pod \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\" (UID: \"e50a36d0-5f6f-49a0-92df-08fe6f997a4d\") " Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.818189 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-kube-api-access-mpfxk" (OuterVolumeSpecName: "kube-api-access-mpfxk") pod "e50a36d0-5f6f-49a0-92df-08fe6f997a4d" (UID: "e50a36d0-5f6f-49a0-92df-08fe6f997a4d"). InnerVolumeSpecName "kube-api-access-mpfxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.854481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e50a36d0-5f6f-49a0-92df-08fe6f997a4d" (UID: "e50a36d0-5f6f-49a0-92df-08fe6f997a4d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.859594 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e50a36d0-5f6f-49a0-92df-08fe6f997a4d" (UID: "e50a36d0-5f6f-49a0-92df-08fe6f997a4d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.872296 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-config" (OuterVolumeSpecName: "config") pod "e50a36d0-5f6f-49a0-92df-08fe6f997a4d" (UID: "e50a36d0-5f6f-49a0-92df-08fe6f997a4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.906760 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e50a36d0-5f6f-49a0-92df-08fe6f997a4d" (UID: "e50a36d0-5f6f-49a0-92df-08fe6f997a4d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.910008 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.910038 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.910047 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.910059 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:57 crc kubenswrapper[4858]: I1205 14:13:57.910070 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpfxk\" (UniqueName: \"kubernetes.io/projected/e50a36d0-5f6f-49a0-92df-08fe6f997a4d-kube-api-access-mpfxk\") on node \"crc\" DevicePath \"\"" Dec 05 14:13:58 crc kubenswrapper[4858]: I1205 14:13:58.052560 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d45fc4855-kd46w"] Dec 05 14:13:58 crc kubenswrapper[4858]: I1205 14:13:58.058639 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d45fc4855-kd46w"] Dec 05 14:13:59 crc kubenswrapper[4858]: I1205 14:13:59.742716 4858 generic.go:334] "Generic (PLEG): container finished" podID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerID="0c94e38e0c6407cebbbc53c43ea9304aafd6e1a4ece231e2375378c43f8901fe" exitCode=0 Dec 05 14:13:59 crc kubenswrapper[4858]: I1205 14:13:59.742851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e","Type":"ContainerDied","Data":"0c94e38e0c6407cebbbc53c43ea9304aafd6e1a4ece231e2375378c43f8901fe"} Dec 05 14:13:59 crc kubenswrapper[4858]: I1205 14:13:59.911679 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" path="/var/lib/kubelet/pods/e50a36d0-5f6f-49a0-92df-08fe6f997a4d/volumes" Dec 05 14:14:00 crc kubenswrapper[4858]: I1205 14:14:00.750944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e","Type":"ContainerStarted","Data":"4486ae1d027ec02849dcbaaef9604147087bf2cf131fa4e861bc6c695cddbdb1"} Dec 05 14:14:00 crc kubenswrapper[4858]: I1205 14:14:00.792627 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371977.062181 podStartE2EDuration="59.792594119s" podCreationTimestamp="2025-12-05 14:13:01 +0000 UTC" firstStartedPulling="2025-12-05 14:13:20.784611435 +0000 UTC m=+1009.332209574" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:14:00.775739714 +0000 UTC m=+1049.323337853" watchObservedRunningTime="2025-12-05 14:14:00.792594119 +0000 UTC m=+1049.340192258" Dec 05 14:14:02 crc kubenswrapper[4858]: I1205 14:14:02.766306 4858 generic.go:334] "Generic (PLEG): container finished" podID="10ff7965-e479-43a2-bf1f-403566f07367" containerID="d51fb959c29e951c76a9b76b359d0a6ca6ca50f12f351de08e83c7daa5e268a0" exitCode=0 Dec 05 14:14:02 crc kubenswrapper[4858]: I1205 14:14:02.766383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bd95n" event={"ID":"10ff7965-e479-43a2-bf1f-403566f07367","Type":"ContainerDied","Data":"d51fb959c29e951c76a9b76b359d0a6ca6ca50f12f351de08e83c7daa5e268a0"} Dec 05 14:14:03 crc kubenswrapper[4858]: I1205 14:14:03.077629 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 05 14:14:03 crc kubenswrapper[4858]: I1205 14:14:03.077900 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 05 14:14:03 crc kubenswrapper[4858]: I1205 14:14:03.808384 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:14:03 crc kubenswrapper[4858]: I1205 14:14:03.823512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1732ed20-5466-4af7-995e-631a4111d81b-etc-swift\") pod \"swift-storage-0\" (UID: \"1732ed20-5466-4af7-995e-631a4111d81b\") " pod="openstack/swift-storage-0" Dec 05 14:14:03 crc kubenswrapper[4858]: I1205 14:14:03.834855 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.171722 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.213517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-swiftconf\") pod \"10ff7965-e479-43a2-bf1f-403566f07367\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.213724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-scripts\") pod \"10ff7965-e479-43a2-bf1f-403566f07367\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.213845 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-dispersionconf\") pod \"10ff7965-e479-43a2-bf1f-403566f07367\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.213872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk4kh\" (UniqueName: \"kubernetes.io/projected/10ff7965-e479-43a2-bf1f-403566f07367-kube-api-access-dk4kh\") pod \"10ff7965-e479-43a2-bf1f-403566f07367\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.213963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/10ff7965-e479-43a2-bf1f-403566f07367-etc-swift\") pod \"10ff7965-e479-43a2-bf1f-403566f07367\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.214451 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-combined-ca-bundle\") pod \"10ff7965-e479-43a2-bf1f-403566f07367\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.214489 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-ring-data-devices\") pod \"10ff7965-e479-43a2-bf1f-403566f07367\" (UID: \"10ff7965-e479-43a2-bf1f-403566f07367\") " Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.215225 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10ff7965-e479-43a2-bf1f-403566f07367-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "10ff7965-e479-43a2-bf1f-403566f07367" (UID: "10ff7965-e479-43a2-bf1f-403566f07367"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.216064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "10ff7965-e479-43a2-bf1f-403566f07367" (UID: "10ff7965-e479-43a2-bf1f-403566f07367"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.216379 4858 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/10ff7965-e479-43a2-bf1f-403566f07367-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.216484 4858 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.498942 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10ff7965-e479-43a2-bf1f-403566f07367-kube-api-access-dk4kh" (OuterVolumeSpecName: "kube-api-access-dk4kh") pod "10ff7965-e479-43a2-bf1f-403566f07367" (UID: "10ff7965-e479-43a2-bf1f-403566f07367"). InnerVolumeSpecName "kube-api-access-dk4kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.500782 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-scripts" (OuterVolumeSpecName: "scripts") pod "10ff7965-e479-43a2-bf1f-403566f07367" (UID: "10ff7965-e479-43a2-bf1f-403566f07367"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.503397 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "10ff7965-e479-43a2-bf1f-403566f07367" (UID: "10ff7965-e479-43a2-bf1f-403566f07367"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.504961 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10ff7965-e479-43a2-bf1f-403566f07367" (UID: "10ff7965-e479-43a2-bf1f-403566f07367"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.505262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "10ff7965-e479-43a2-bf1f-403566f07367" (UID: "10ff7965-e479-43a2-bf1f-403566f07367"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.521730 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.521758 4858 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.521766 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10ff7965-e479-43a2-bf1f-403566f07367-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.521774 4858 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/10ff7965-e479-43a2-bf1f-403566f07367-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.521783 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk4kh\" (UniqueName: \"kubernetes.io/projected/10ff7965-e479-43a2-bf1f-403566f07367-kube-api-access-dk4kh\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.628235 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 14:14:04 crc kubenswrapper[4858]: W1205 14:14:04.634797 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1732ed20_5466_4af7_995e_631a4111d81b.slice/crio-f736b5d7844951175e6d67b8d1cbb6c3416c88b72ba6d97fa7a94c079e00e810 WatchSource:0}: Error finding container f736b5d7844951175e6d67b8d1cbb6c3416c88b72ba6d97fa7a94c079e00e810: Status 404 returned error can't find the container with id f736b5d7844951175e6d67b8d1cbb6c3416c88b72ba6d97fa7a94c079e00e810 Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.782621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-bd95n" event={"ID":"10ff7965-e479-43a2-bf1f-403566f07367","Type":"ContainerDied","Data":"8b8f3e57552f7b9c6a8b69d9e8ab8d1bf491d21afc67b6438cb06ab210c234be"} Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.782663 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8f3e57552f7b9c6a8b69d9e8ab8d1bf491d21afc67b6438cb06ab210c234be" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.782723 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-bd95n" Dec 05 14:14:04 crc kubenswrapper[4858]: I1205 14:14:04.788702 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"f736b5d7844951175e6d67b8d1cbb6c3416c88b72ba6d97fa7a94c079e00e810"} Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.146984 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.223139 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.538740 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5lcmn"] Dec 05 14:14:05 crc kubenswrapper[4858]: E1205 14:14:05.539130 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10ff7965-e479-43a2-bf1f-403566f07367" containerName="swift-ring-rebalance" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.539151 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10ff7965-e479-43a2-bf1f-403566f07367" containerName="swift-ring-rebalance" Dec 05 14:14:05 crc kubenswrapper[4858]: E1205 14:14:05.539182 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" containerName="dnsmasq-dns" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.539188 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" containerName="dnsmasq-dns" Dec 05 14:14:05 crc kubenswrapper[4858]: E1205 14:14:05.539206 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" containerName="init" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.539213 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" containerName="init" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.539363 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e50a36d0-5f6f-49a0-92df-08fe6f997a4d" containerName="dnsmasq-dns" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.539386 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="10ff7965-e479-43a2-bf1f-403566f07367" containerName="swift-ring-rebalance" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.541639 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.545081 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5lcmn"] Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.650964 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ca40e9-5047-4404-a875-cae910187c3b-operator-scripts\") pod \"glance-db-create-5lcmn\" (UID: \"77ca40e9-5047-4404-a875-cae910187c3b\") " pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.651036 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqj9s\" (UniqueName: \"kubernetes.io/projected/77ca40e9-5047-4404-a875-cae910187c3b-kube-api-access-wqj9s\") pod \"glance-db-create-5lcmn\" (UID: \"77ca40e9-5047-4404-a875-cae910187c3b\") " pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.754201 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ca40e9-5047-4404-a875-cae910187c3b-operator-scripts\") pod \"glance-db-create-5lcmn\" (UID: \"77ca40e9-5047-4404-a875-cae910187c3b\") " pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.754257 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqj9s\" (UniqueName: \"kubernetes.io/projected/77ca40e9-5047-4404-a875-cae910187c3b-kube-api-access-wqj9s\") pod \"glance-db-create-5lcmn\" (UID: \"77ca40e9-5047-4404-a875-cae910187c3b\") " pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.755266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ca40e9-5047-4404-a875-cae910187c3b-operator-scripts\") pod \"glance-db-create-5lcmn\" (UID: \"77ca40e9-5047-4404-a875-cae910187c3b\") " pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.788722 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqj9s\" (UniqueName: \"kubernetes.io/projected/77ca40e9-5047-4404-a875-cae910187c3b-kube-api-access-wqj9s\") pod \"glance-db-create-5lcmn\" (UID: \"77ca40e9-5047-4404-a875-cae910187c3b\") " pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.799406 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ba7f-account-create-update-56thq"] Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.800503 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.804888 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.808610 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"44ca9a941e74239cb7d4d1fef6181725f7f84e00d9343ece07476bc80b19a78b"} Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.832777 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ba7f-account-create-update-56thq"] Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.855334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7pqm\" (UniqueName: \"kubernetes.io/projected/8bf192c4-1689-4d05-8653-7841a5dbbdd0-kube-api-access-f7pqm\") pod \"glance-ba7f-account-create-update-56thq\" (UID: \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\") " pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.855475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bf192c4-1689-4d05-8653-7841a5dbbdd0-operator-scripts\") pod \"glance-ba7f-account-create-update-56thq\" (UID: \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\") " pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.876510 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.957199 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7pqm\" (UniqueName: \"kubernetes.io/projected/8bf192c4-1689-4d05-8653-7841a5dbbdd0-kube-api-access-f7pqm\") pod \"glance-ba7f-account-create-update-56thq\" (UID: \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\") " pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.957334 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bf192c4-1689-4d05-8653-7841a5dbbdd0-operator-scripts\") pod \"glance-ba7f-account-create-update-56thq\" (UID: \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\") " pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.958343 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bf192c4-1689-4d05-8653-7841a5dbbdd0-operator-scripts\") pod \"glance-ba7f-account-create-update-56thq\" (UID: \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\") " pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:05 crc kubenswrapper[4858]: I1205 14:14:05.976270 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7pqm\" (UniqueName: \"kubernetes.io/projected/8bf192c4-1689-4d05-8653-7841a5dbbdd0-kube-api-access-f7pqm\") pod \"glance-ba7f-account-create-update-56thq\" (UID: \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\") " pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.128799 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:06 crc kubenswrapper[4858]: W1205 14:14:06.585964 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ca40e9_5047_4404_a875_cae910187c3b.slice/crio-5608271c16d0dcebf6a6b7eaa9e56dc4c26ced7ef73215f358cbadb78c36df88 WatchSource:0}: Error finding container 5608271c16d0dcebf6a6b7eaa9e56dc4c26ced7ef73215f358cbadb78c36df88: Status 404 returned error can't find the container with id 5608271c16d0dcebf6a6b7eaa9e56dc4c26ced7ef73215f358cbadb78c36df88 Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.590316 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5lcmn"] Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.590705 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.688343 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ba7f-account-create-update-56thq"] Dec 05 14:14:06 crc kubenswrapper[4858]: W1205 14:14:06.706392 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bf192c4_1689_4d05_8653_7841a5dbbdd0.slice/crio-0e6dec0021d5ddaa020da65e7afc51c74573b0579831f9ab167751a286ec5674 WatchSource:0}: Error finding container 0e6dec0021d5ddaa020da65e7afc51c74573b0579831f9ab167751a286ec5674: Status 404 returned error can't find the container with id 0e6dec0021d5ddaa020da65e7afc51c74573b0579831f9ab167751a286ec5674 Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.817063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5lcmn" event={"ID":"77ca40e9-5047-4404-a875-cae910187c3b","Type":"ContainerStarted","Data":"d130387f25faf6d27b9c3053479efd55a3df45f009bc151467e65137b8b82b79"} Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.817108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5lcmn" event={"ID":"77ca40e9-5047-4404-a875-cae910187c3b","Type":"ContainerStarted","Data":"5608271c16d0dcebf6a6b7eaa9e56dc4c26ced7ef73215f358cbadb78c36df88"} Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.823513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"e37554352aa4f382d4abfe91f2f40ba6129e1ff84d88ab66c30c62b898fdeb0e"} Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.823567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"f494853c21ff34ef21f91f17c44307ae14dd8ad60ca84d92f76669fafd037059"} Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.823580 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"92cd0e2e181d5b608fd4c16810d315ecc3a350da6c58290cbef58c0060c00275"} Dec 05 14:14:06 crc kubenswrapper[4858]: I1205 14:14:06.832859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ba7f-account-create-update-56thq" event={"ID":"8bf192c4-1689-4d05-8653-7841a5dbbdd0","Type":"ContainerStarted","Data":"0e6dec0021d5ddaa020da65e7afc51c74573b0579831f9ab167751a286ec5674"} Dec 05 14:14:06 crc 
kubenswrapper[4858]: I1205 14:14:06.839255 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-5lcmn" podStartSLOduration=1.8392339789999999 podStartE2EDuration="1.839233979s" podCreationTimestamp="2025-12-05 14:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:14:06.836477824 +0000 UTC m=+1055.384075963" watchObservedRunningTime="2025-12-05 14:14:06.839233979 +0000 UTC m=+1055.386832118" Dec 05 14:14:07 crc kubenswrapper[4858]: I1205 14:14:07.855310 4858 generic.go:334] "Generic (PLEG): container finished" podID="77ca40e9-5047-4404-a875-cae910187c3b" containerID="d130387f25faf6d27b9c3053479efd55a3df45f009bc151467e65137b8b82b79" exitCode=0 Dec 05 14:14:07 crc kubenswrapper[4858]: I1205 14:14:07.855959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5lcmn" event={"ID":"77ca40e9-5047-4404-a875-cae910187c3b","Type":"ContainerDied","Data":"d130387f25faf6d27b9c3053479efd55a3df45f009bc151467e65137b8b82b79"} Dec 05 14:14:07 crc kubenswrapper[4858]: I1205 14:14:07.870621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"b789d4a6515b2911ec6e65df837e6de87d4d702840121a204cb3a6c7a12cb39a"} Dec 05 14:14:07 crc kubenswrapper[4858]: I1205 14:14:07.870685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"3378b13a51b7e5792f9acb920572bbc4cdaf967ec8c459e61a6ec4f10fe82622"} Dec 05 14:14:07 crc kubenswrapper[4858]: I1205 14:14:07.875290 4858 generic.go:334] "Generic (PLEG): container finished" podID="8bf192c4-1689-4d05-8653-7841a5dbbdd0" containerID="e2625d9407e758869df51d6cde3d25b335ed5c108cca30e228420e67d53e6ca6" exitCode=0 Dec 05 14:14:07 crc kubenswrapper[4858]: I1205 14:14:07.875331 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ba7f-account-create-update-56thq" event={"ID":"8bf192c4-1689-4d05-8653-7841a5dbbdd0","Type":"ContainerDied","Data":"e2625d9407e758869df51d6cde3d25b335ed5c108cca30e228420e67d53e6ca6"} Dec 05 14:14:08 crc kubenswrapper[4858]: I1205 14:14:08.900238 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"dbf78edd79abbf77f946945f5ae452e02e93f6aa6e29dc3e2ba47456574a9d16"} Dec 05 14:14:08 crc kubenswrapper[4858]: I1205 14:14:08.900285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"f7d47aac7327a98bd4f2091b20d714e5f8b80fd4cc3b80a000d02c9203d2d907"} Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.325618 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.340904 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.445557 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ca40e9-5047-4404-a875-cae910187c3b-operator-scripts\") pod \"77ca40e9-5047-4404-a875-cae910187c3b\" (UID: \"77ca40e9-5047-4404-a875-cae910187c3b\") " Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.445897 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7pqm\" (UniqueName: \"kubernetes.io/projected/8bf192c4-1689-4d05-8653-7841a5dbbdd0-kube-api-access-f7pqm\") pod \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\" (UID: \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\") " Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.446022 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bf192c4-1689-4d05-8653-7841a5dbbdd0-operator-scripts\") pod \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\" (UID: \"8bf192c4-1689-4d05-8653-7841a5dbbdd0\") " Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.446175 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqj9s\" (UniqueName: \"kubernetes.io/projected/77ca40e9-5047-4404-a875-cae910187c3b-kube-api-access-wqj9s\") pod \"77ca40e9-5047-4404-a875-cae910187c3b\" (UID: \"77ca40e9-5047-4404-a875-cae910187c3b\") " Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.446804 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf192c4-1689-4d05-8653-7841a5dbbdd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8bf192c4-1689-4d05-8653-7841a5dbbdd0" (UID: "8bf192c4-1689-4d05-8653-7841a5dbbdd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.447106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77ca40e9-5047-4404-a875-cae910187c3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77ca40e9-5047-4404-a875-cae910187c3b" (UID: "77ca40e9-5047-4404-a875-cae910187c3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.452005 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf192c4-1689-4d05-8653-7841a5dbbdd0-kube-api-access-f7pqm" (OuterVolumeSpecName: "kube-api-access-f7pqm") pod "8bf192c4-1689-4d05-8653-7841a5dbbdd0" (UID: "8bf192c4-1689-4d05-8653-7841a5dbbdd0"). InnerVolumeSpecName "kube-api-access-f7pqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.453773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ca40e9-5047-4404-a875-cae910187c3b-kube-api-access-wqj9s" (OuterVolumeSpecName: "kube-api-access-wqj9s") pod "77ca40e9-5047-4404-a875-cae910187c3b" (UID: "77ca40e9-5047-4404-a875-cae910187c3b"). InnerVolumeSpecName "kube-api-access-wqj9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.548416 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77ca40e9-5047-4404-a875-cae910187c3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.548457 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7pqm\" (UniqueName: \"kubernetes.io/projected/8bf192c4-1689-4d05-8653-7841a5dbbdd0-kube-api-access-f7pqm\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.548470 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8bf192c4-1689-4d05-8653-7841a5dbbdd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.548482 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqj9s\" (UniqueName: \"kubernetes.io/projected/77ca40e9-5047-4404-a875-cae910187c3b-kube-api-access-wqj9s\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.653894 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.913564 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5lcmn" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.917437 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5lcmn" event={"ID":"77ca40e9-5047-4404-a875-cae910187c3b","Type":"ContainerDied","Data":"5608271c16d0dcebf6a6b7eaa9e56dc4c26ced7ef73215f358cbadb78c36df88"} Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.917473 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5608271c16d0dcebf6a6b7eaa9e56dc4c26ced7ef73215f358cbadb78c36df88" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.951565 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"48605e6f95b60273b00968f43641a864f394c019fa0c6e04a0f1d6eb83000622"} Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.951605 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"5537a32997b05aef82be1f3b61e93b3e33bf7e0fe7468c605e508261ea8bc225"} Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.951617 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"f473e750469f83e7c5717dd4d25540b66eece8645dcdfdc006ec8854eae0e97d"} Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.951633 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"58ffff034cdc6d7fabf3cf0312a55cf8f610eb35766cabeb954daed9331372a0"} Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.958092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ba7f-account-create-update-56thq" 
event={"ID":"8bf192c4-1689-4d05-8653-7841a5dbbdd0","Type":"ContainerDied","Data":"0e6dec0021d5ddaa020da65e7afc51c74573b0579831f9ab167751a286ec5674"} Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.958145 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ba7f-account-create-update-56thq" Dec 05 14:14:09 crc kubenswrapper[4858]: I1205 14:14:09.958149 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e6dec0021d5ddaa020da65e7afc51c74573b0579831f9ab167751a286ec5674" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.957712 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cl2mg"] Dec 05 14:14:10 crc kubenswrapper[4858]: E1205 14:14:10.958328 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ca40e9-5047-4404-a875-cae910187c3b" containerName="mariadb-database-create" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.958341 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ca40e9-5047-4404-a875-cae910187c3b" containerName="mariadb-database-create" Dec 05 14:14:10 crc kubenswrapper[4858]: E1205 14:14:10.958354 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf192c4-1689-4d05-8653-7841a5dbbdd0" containerName="mariadb-account-create-update" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.958360 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf192c4-1689-4d05-8653-7841a5dbbdd0" containerName="mariadb-account-create-update" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.958533 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf192c4-1689-4d05-8653-7841a5dbbdd0" containerName="mariadb-account-create-update" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.958555 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ca40e9-5047-4404-a875-cae910187c3b" containerName="mariadb-database-create" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.959118 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.961310 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tfbpg" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.961606 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.980796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"1f0faaa83e54defa714451f10b421b5cec3751300c20211ceb703bc13d018999"} Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.981034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"051fde9bde57e799d8d3a8c205e48de7babb28bf46bd24a71a0847948670672b"} Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.981163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1732ed20-5466-4af7-995e-631a4111d81b","Type":"ContainerStarted","Data":"64881b403ec1a692b3c4bb21e22cb41de1f81450de611d8fc3c29eda8f6956ea"} Dec 05 14:14:10 crc kubenswrapper[4858]: I1205 14:14:10.982742 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cl2mg"] Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.010558 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.030280 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.717915048 podStartE2EDuration="25.030257497s" podCreationTimestamp="2025-12-05 14:13:46 +0000 UTC" firstStartedPulling="2025-12-05 14:14:04.638289879 +0000 UTC m=+1053.185888018" lastFinishedPulling="2025-12-05 14:14:08.950632318 +0000 UTC m=+1057.498230467" observedRunningTime="2025-12-05 14:14:11.022641901 +0000 UTC m=+1059.570240050" watchObservedRunningTime="2025-12-05 14:14:11.030257497 +0000 UTC m=+1059.577855636" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.061992 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kk9tz" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.073618 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctgk\" (UniqueName: \"kubernetes.io/projected/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-kube-api-access-5ctgk\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.073790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-combined-ca-bundle\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.073865 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-config-data\") pod \"glance-db-sync-cl2mg\" (UID: 
\"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.073887 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-db-sync-config-data\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.175702 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-combined-ca-bundle\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.175871 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-config-data\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.175909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-db-sync-config-data\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.175945 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ctgk\" (UniqueName: \"kubernetes.io/projected/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-kube-api-access-5ctgk\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.180858 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-config-data\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.189976 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-combined-ca-bundle\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.190526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-db-sync-config-data\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.194372 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ctgk\" (UniqueName: \"kubernetes.io/projected/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-kube-api-access-5ctgk\") pod \"glance-db-sync-cl2mg\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.261243 4858 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gtl95-config-lbfkw"] Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.262666 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.265011 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.275835 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.280154 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gtl95-config-lbfkw"] Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.392958 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-additional-scripts\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.393023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-scripts\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.393118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run-ovn\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.393142 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw5mr\" (UniqueName: \"kubernetes.io/projected/87aeea0e-c855-44db-b1da-ec60c0436310-kube-api-access-hw5mr\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.393215 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.393244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-log-ovn\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.398052 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-768c5cd5f7-4pfv4"] Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.401523 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.407345 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.416297 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-768c5cd5f7-4pfv4"] Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.494731 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-nb\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.494773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-config\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.494835 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run-ovn\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.494912 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw5mr\" (UniqueName: \"kubernetes.io/projected/87aeea0e-c855-44db-b1da-ec60c0436310-kube-api-access-hw5mr\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.494947 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-svc\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.494982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-sb\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.494998 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjmgt\" (UniqueName: \"kubernetes.io/projected/836ed005-6d52-439f-8a6c-8bdd848fbb4f-kube-api-access-pjmgt\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.495025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: 
\"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.495043 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-log-ovn\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.495064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-swift-storage-0\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.495096 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-additional-scripts\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.495121 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-scripts\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.495521 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run-ovn\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.496287 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-log-ovn\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.496393 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.498552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-additional-scripts\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.499979 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-scripts\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: 
\"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.523608 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw5mr\" (UniqueName: \"kubernetes.io/projected/87aeea0e-c855-44db-b1da-ec60c0436310-kube-api-access-hw5mr\") pod \"ovn-controller-gtl95-config-lbfkw\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.599983 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-nb\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.600095 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-config\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.600171 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-svc\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.600216 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-sb\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.600242 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjmgt\" (UniqueName: \"kubernetes.io/projected/836ed005-6d52-439f-8a6c-8bdd848fbb4f-kube-api-access-pjmgt\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.600288 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-swift-storage-0\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.600970 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-nb\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.601088 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-config\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " 
pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.601289 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-swift-storage-0\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.601437 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-sb\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.601562 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-svc\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.622048 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjmgt\" (UniqueName: \"kubernetes.io/projected/836ed005-6d52-439f-8a6c-8bdd848fbb4f-kube-api-access-pjmgt\") pod \"dnsmasq-dns-768c5cd5f7-4pfv4\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.666731 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.753911 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:11 crc kubenswrapper[4858]: I1205 14:14:11.945002 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cl2mg"] Dec 05 14:14:11 crc kubenswrapper[4858]: W1205 14:14:11.947394 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48dfcb42_ecb6_463d_9e5f_ddbf758dfee3.slice/crio-24cac268ecbaf045c5409fe966887ef070f70d44da17562e406d6aef6fef5bd0 WatchSource:0}: Error finding container 24cac268ecbaf045c5409fe966887ef070f70d44da17562e406d6aef6fef5bd0: Status 404 returned error can't find the container with id 24cac268ecbaf045c5409fe966887ef070f70d44da17562e406d6aef6fef5bd0 Dec 05 14:14:12 crc kubenswrapper[4858]: I1205 14:14:12.003494 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl2mg" event={"ID":"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3","Type":"ContainerStarted","Data":"24cac268ecbaf045c5409fe966887ef070f70d44da17562e406d6aef6fef5bd0"} Dec 05 14:14:12 crc kubenswrapper[4858]: I1205 14:14:12.113123 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gtl95-config-lbfkw"] Dec 05 14:14:12 crc kubenswrapper[4858]: W1205 14:14:12.120274 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87aeea0e_c855_44db_b1da_ec60c0436310.slice/crio-554c3c530589090389a602f162d6e903e80fcd5335531f3fff7c7d52b954ba44 WatchSource:0}: Error finding container 554c3c530589090389a602f162d6e903e80fcd5335531f3fff7c7d52b954ba44: Status 404 returned error can't find the container with id 554c3c530589090389a602f162d6e903e80fcd5335531f3fff7c7d52b954ba44 Dec 05 14:14:12 crc kubenswrapper[4858]: W1205 14:14:12.253803 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod836ed005_6d52_439f_8a6c_8bdd848fbb4f.slice/crio-192d68516bd8c9c87e5cc637f48e237e99222e93022e6ab1c003e8b5f5a354a5 WatchSource:0}: Error finding container 192d68516bd8c9c87e5cc637f48e237e99222e93022e6ab1c003e8b5f5a354a5: Status 404 returned error can't find the container with id 192d68516bd8c9c87e5cc637f48e237e99222e93022e6ab1c003e8b5f5a354a5 Dec 05 14:14:12 crc kubenswrapper[4858]: I1205 14:14:12.260046 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-768c5cd5f7-4pfv4"] Dec 05 14:14:13 crc kubenswrapper[4858]: I1205 14:14:13.019205 4858 generic.go:334] "Generic (PLEG): container finished" podID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" containerID="f9eea525a3924e2e2df72be9bbbff99540f8d50df6eb70061584e4f0453a7996" exitCode=0 Dec 05 14:14:13 crc kubenswrapper[4858]: I1205 14:14:13.019752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" event={"ID":"836ed005-6d52-439f-8a6c-8bdd848fbb4f","Type":"ContainerDied","Data":"f9eea525a3924e2e2df72be9bbbff99540f8d50df6eb70061584e4f0453a7996"} Dec 05 14:14:13 crc kubenswrapper[4858]: I1205 14:14:13.019779 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" event={"ID":"836ed005-6d52-439f-8a6c-8bdd848fbb4f","Type":"ContainerStarted","Data":"192d68516bd8c9c87e5cc637f48e237e99222e93022e6ab1c003e8b5f5a354a5"} Dec 05 14:14:13 crc kubenswrapper[4858]: I1205 14:14:13.021880 4858 generic.go:334] "Generic (PLEG): container finished" podID="87aeea0e-c855-44db-b1da-ec60c0436310" 
containerID="7a3e9621021bd52e7dd7b1554b8aadfcd9ad6136a7b3323b6189ac55d0c46516" exitCode=0 Dec 05 14:14:13 crc kubenswrapper[4858]: I1205 14:14:13.021923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtl95-config-lbfkw" event={"ID":"87aeea0e-c855-44db-b1da-ec60c0436310","Type":"ContainerDied","Data":"7a3e9621021bd52e7dd7b1554b8aadfcd9ad6136a7b3323b6189ac55d0c46516"} Dec 05 14:14:13 crc kubenswrapper[4858]: I1205 14:14:13.021950 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtl95-config-lbfkw" event={"ID":"87aeea0e-c855-44db-b1da-ec60c0436310","Type":"ContainerStarted","Data":"554c3c530589090389a602f162d6e903e80fcd5335531f3fff7c7d52b954ba44"} Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.335178 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.455620 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-scripts\") pod \"87aeea0e-c855-44db-b1da-ec60c0436310\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.455671 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-log-ovn\") pod \"87aeea0e-c855-44db-b1da-ec60c0436310\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.455781 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run\") pod \"87aeea0e-c855-44db-b1da-ec60c0436310\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.455809 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run-ovn\") pod \"87aeea0e-c855-44db-b1da-ec60c0436310\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.455864 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw5mr\" (UniqueName: \"kubernetes.io/projected/87aeea0e-c855-44db-b1da-ec60c0436310-kube-api-access-hw5mr\") pod \"87aeea0e-c855-44db-b1da-ec60c0436310\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.456098 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-additional-scripts\") pod \"87aeea0e-c855-44db-b1da-ec60c0436310\" (UID: \"87aeea0e-c855-44db-b1da-ec60c0436310\") " Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.456525 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run" (OuterVolumeSpecName: "var-run") pod "87aeea0e-c855-44db-b1da-ec60c0436310" (UID: "87aeea0e-c855-44db-b1da-ec60c0436310"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.456584 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "87aeea0e-c855-44db-b1da-ec60c0436310" (UID: "87aeea0e-c855-44db-b1da-ec60c0436310"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.456585 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "87aeea0e-c855-44db-b1da-ec60c0436310" (UID: "87aeea0e-c855-44db-b1da-ec60c0436310"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.457073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "87aeea0e-c855-44db-b1da-ec60c0436310" (UID: "87aeea0e-c855-44db-b1da-ec60c0436310"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.457283 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-scripts" (OuterVolumeSpecName: "scripts") pod "87aeea0e-c855-44db-b1da-ec60c0436310" (UID: "87aeea0e-c855-44db-b1da-ec60c0436310"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.460259 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87aeea0e-c855-44db-b1da-ec60c0436310-kube-api-access-hw5mr" (OuterVolumeSpecName: "kube-api-access-hw5mr") pod "87aeea0e-c855-44db-b1da-ec60c0436310" (UID: "87aeea0e-c855-44db-b1da-ec60c0436310"). InnerVolumeSpecName "kube-api-access-hw5mr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.558149 4858 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.558205 4858 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.558224 4858 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/87aeea0e-c855-44db-b1da-ec60c0436310-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.558237 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw5mr\" (UniqueName: \"kubernetes.io/projected/87aeea0e-c855-44db-b1da-ec60c0436310-kube-api-access-hw5mr\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.558252 4858 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.558263 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87aeea0e-c855-44db-b1da-ec60c0436310-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.761075 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.761137 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.761200 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.761773 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5e80f882b080532d912d4ccb8829cb93a92e3352e086e2ac39b582773b7cafa"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.761852 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://e5e80f882b080532d912d4ccb8829cb93a92e3352e086e2ac39b582773b7cafa" gracePeriod=600 Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.833519 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-db-create-5lwkq"] Dec 05 14:14:14 crc kubenswrapper[4858]: E1205 14:14:14.833888 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87aeea0e-c855-44db-b1da-ec60c0436310" containerName="ovn-config" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.833899 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="87aeea0e-c855-44db-b1da-ec60c0436310" containerName="ovn-config" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.834082 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="87aeea0e-c855-44db-b1da-ec60c0436310" containerName="ovn-config" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.834603 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.844216 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5lwkq"] Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.976895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-operator-scripts\") pod \"keystone-db-create-5lwkq\" (UID: \"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\") " pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:14 crc kubenswrapper[4858]: I1205 14:14:14.976983 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4wgs\" (UniqueName: \"kubernetes.io/projected/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-kube-api-access-s4wgs\") pod \"keystone-db-create-5lwkq\" (UID: \"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\") " pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.026164 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-266f-account-create-update-dhqgj"] Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.027695 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.029978 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.051463 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gtl95-config-lbfkw" event={"ID":"87aeea0e-c855-44db-b1da-ec60c0436310","Type":"ContainerDied","Data":"554c3c530589090389a602f162d6e903e80fcd5335531f3fff7c7d52b954ba44"} Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.051511 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="554c3c530589090389a602f162d6e903e80fcd5335531f3fff7c7d52b954ba44" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.051581 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gtl95-config-lbfkw" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.065187 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-266f-account-create-update-dhqgj"] Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.072861 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" event={"ID":"836ed005-6d52-439f-8a6c-8bdd848fbb4f","Type":"ContainerStarted","Data":"c437a28377e814bd2e0cd420838385ae30183cf8db086cc453bd6966683ee0cc"} Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.073896 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.078030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-operator-scripts\") pod \"keystone-db-create-5lwkq\" (UID: \"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\") " pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.078112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4wgs\" (UniqueName: \"kubernetes.io/projected/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-kube-api-access-s4wgs\") pod \"keystone-db-create-5lwkq\" (UID: \"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\") " pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.078723 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-operator-scripts\") pod \"keystone-db-create-5lwkq\" (UID: \"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\") " pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.079878 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="e5e80f882b080532d912d4ccb8829cb93a92e3352e086e2ac39b582773b7cafa" exitCode=0 Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.079922 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"e5e80f882b080532d912d4ccb8829cb93a92e3352e086e2ac39b582773b7cafa"} Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.079952 4858 scope.go:117] "RemoveContainer" containerID="aeb26ce2f72c5b27c0b5939e948f7b4c1c734a8dc5b04d0306f5422f039d5f18" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.117434 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" podStartSLOduration=4.1174126 podStartE2EDuration="4.1174126s" podCreationTimestamp="2025-12-05 14:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:14:15.091041247 +0000 UTC m=+1063.638639406" watchObservedRunningTime="2025-12-05 14:14:15.1174126 +0000 UTC m=+1063.665010739" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.138504 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4wgs\" (UniqueName: \"kubernetes.io/projected/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-kube-api-access-s4wgs\") pod \"keystone-db-create-5lwkq\" (UID: 
\"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\") " pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.169127 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wj5nl"] Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.170226 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.180378 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wj5nl"] Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.180799 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzxm\" (UniqueName: \"kubernetes.io/projected/b28ac28e-619d-499c-bc7a-4baa5f06abe9-kube-api-access-5qzxm\") pod \"keystone-266f-account-create-update-dhqgj\" (UID: \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\") " pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.180983 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b28ac28e-619d-499c-bc7a-4baa5f06abe9-operator-scripts\") pod \"keystone-266f-account-create-update-dhqgj\" (UID: \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\") " pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.195959 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d17-account-create-update-pjgn4"] Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.197460 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.198618 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.202064 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.219383 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d17-account-create-update-pjgn4"] Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.290445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-operator-scripts\") pod \"placement-7d17-account-create-update-pjgn4\" (UID: \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\") " pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.290501 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzxm\" (UniqueName: \"kubernetes.io/projected/b28ac28e-619d-499c-bc7a-4baa5f06abe9-kube-api-access-5qzxm\") pod \"keystone-266f-account-create-update-dhqgj\" (UID: \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\") " pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.290612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53dc11c-7183-4492-879b-ed0d2ca99c18-operator-scripts\") pod \"placement-db-create-wj5nl\" (UID: \"e53dc11c-7183-4492-879b-ed0d2ca99c18\") " pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.291567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b28ac28e-619d-499c-bc7a-4baa5f06abe9-operator-scripts\") pod \"keystone-266f-account-create-update-dhqgj\" (UID: \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\") " pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.291853 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c69x\" (UniqueName: \"kubernetes.io/projected/e53dc11c-7183-4492-879b-ed0d2ca99c18-kube-api-access-2c69x\") pod \"placement-db-create-wj5nl\" (UID: \"e53dc11c-7183-4492-879b-ed0d2ca99c18\") " pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.291942 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf46g\" (UniqueName: \"kubernetes.io/projected/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-kube-api-access-nf46g\") pod \"placement-7d17-account-create-update-pjgn4\" (UID: \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\") " pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.293007 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b28ac28e-619d-499c-bc7a-4baa5f06abe9-operator-scripts\") pod \"keystone-266f-account-create-update-dhqgj\" (UID: \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\") " pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.312708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzxm\" (UniqueName: 
\"kubernetes.io/projected/b28ac28e-619d-499c-bc7a-4baa5f06abe9-kube-api-access-5qzxm\") pod \"keystone-266f-account-create-update-dhqgj\" (UID: \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\") " pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.389940 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.393407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53dc11c-7183-4492-879b-ed0d2ca99c18-operator-scripts\") pod \"placement-db-create-wj5nl\" (UID: \"e53dc11c-7183-4492-879b-ed0d2ca99c18\") " pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.393491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c69x\" (UniqueName: \"kubernetes.io/projected/e53dc11c-7183-4492-879b-ed0d2ca99c18-kube-api-access-2c69x\") pod \"placement-db-create-wj5nl\" (UID: \"e53dc11c-7183-4492-879b-ed0d2ca99c18\") " pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.393528 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf46g\" (UniqueName: \"kubernetes.io/projected/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-kube-api-access-nf46g\") pod \"placement-7d17-account-create-update-pjgn4\" (UID: \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\") " pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.393587 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-operator-scripts\") pod \"placement-7d17-account-create-update-pjgn4\" (UID: \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\") " pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.394277 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53dc11c-7183-4492-879b-ed0d2ca99c18-operator-scripts\") pod \"placement-db-create-wj5nl\" (UID: \"e53dc11c-7183-4492-879b-ed0d2ca99c18\") " pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.394358 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-operator-scripts\") pod \"placement-7d17-account-create-update-pjgn4\" (UID: \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\") " pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.413753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf46g\" (UniqueName: \"kubernetes.io/projected/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-kube-api-access-nf46g\") pod \"placement-7d17-account-create-update-pjgn4\" (UID: \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\") " pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.422575 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c69x\" (UniqueName: 
\"kubernetes.io/projected/e53dc11c-7183-4492-879b-ed0d2ca99c18-kube-api-access-2c69x\") pod \"placement-db-create-wj5nl\" (UID: \"e53dc11c-7183-4492-879b-ed0d2ca99c18\") " pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.476419 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gtl95-config-lbfkw"] Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.486027 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gtl95-config-lbfkw"] Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.496217 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.589624 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.738934 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5lwkq"] Dec 05 14:14:15 crc kubenswrapper[4858]: W1205 14:14:15.758280 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50f8533f_a5fe_4af0_98db_eb1cc52e7b0c.slice/crio-c6e4caeea250fca04b716af06871b7b0a99b4672ee8f63b7052f8b816fe2b6a7 WatchSource:0}: Error finding container c6e4caeea250fca04b716af06871b7b0a99b4672ee8f63b7052f8b816fe2b6a7: Status 404 returned error can't find the container with id c6e4caeea250fca04b716af06871b7b0a99b4672ee8f63b7052f8b816fe2b6a7 Dec 05 14:14:15 crc kubenswrapper[4858]: I1205 14:14:15.931982 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87aeea0e-c855-44db-b1da-ec60c0436310" path="/var/lib/kubelet/pods/87aeea0e-c855-44db-b1da-ec60c0436310/volumes" Dec 05 14:14:16 crc kubenswrapper[4858]: I1205 14:14:16.035020 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-266f-account-create-update-dhqgj"] Dec 05 14:14:16 crc kubenswrapper[4858]: I1205 14:14:16.123730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lwkq" event={"ID":"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c","Type":"ContainerStarted","Data":"c6e4caeea250fca04b716af06871b7b0a99b4672ee8f63b7052f8b816fe2b6a7"} Dec 05 14:14:16 crc kubenswrapper[4858]: I1205 14:14:16.124945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-266f-account-create-update-dhqgj" event={"ID":"b28ac28e-619d-499c-bc7a-4baa5f06abe9","Type":"ContainerStarted","Data":"ea6d61a24370b26c2c1ee23df0e729d22983303d7afe3fa9d93366d7490ddba1"} Dec 05 14:14:16 crc kubenswrapper[4858]: I1205 14:14:16.139836 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"472064fae0079b1bc994525982e709b1ab2bd1dccaa9fb9d8e2cbb9dfa8c4695"} Dec 05 14:14:16 crc kubenswrapper[4858]: I1205 14:14:16.170802 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wj5nl"] Dec 05 14:14:16 crc kubenswrapper[4858]: W1205 14:14:16.227727 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode53dc11c_7183_4492_879b_ed0d2ca99c18.slice/crio-15421ed7baf4e8696af2c8b57366e660ca75ccdfce2f1bbeb207b06537e8c36e WatchSource:0}: Error finding container 
15421ed7baf4e8696af2c8b57366e660ca75ccdfce2f1bbeb207b06537e8c36e: Status 404 returned error can't find the container with id 15421ed7baf4e8696af2c8b57366e660ca75ccdfce2f1bbeb207b06537e8c36e Dec 05 14:14:16 crc kubenswrapper[4858]: I1205 14:14:16.255850 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d17-account-create-update-pjgn4"] Dec 05 14:14:16 crc kubenswrapper[4858]: W1205 14:14:16.266639 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83a08cdd_eca5_4352_bdb6_fa27c4c2c317.slice/crio-c800562d4e6fa50ac915c5e251bdf4af214da08cdb81a834604982f84aa36ab0 WatchSource:0}: Error finding container c800562d4e6fa50ac915c5e251bdf4af214da08cdb81a834604982f84aa36ab0: Status 404 returned error can't find the container with id c800562d4e6fa50ac915c5e251bdf4af214da08cdb81a834604982f84aa36ab0 Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.149933 4858 generic.go:334] "Generic (PLEG): container finished" podID="50f8533f-a5fe-4af0-98db-eb1cc52e7b0c" containerID="d911c6acd7b15e00234f14117a3a832b0fc5c1ccbd3e50360ad526fc0348a28e" exitCode=0 Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.150029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lwkq" event={"ID":"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c","Type":"ContainerDied","Data":"d911c6acd7b15e00234f14117a3a832b0fc5c1ccbd3e50360ad526fc0348a28e"} Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.153168 4858 generic.go:334] "Generic (PLEG): container finished" podID="83a08cdd-eca5-4352-bdb6-fa27c4c2c317" containerID="bfb73b62fd19ecd260ff9de5e818be4539924e0f8e1f692d0bd699c0c50f1b9f" exitCode=0 Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.153238 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d17-account-create-update-pjgn4" event={"ID":"83a08cdd-eca5-4352-bdb6-fa27c4c2c317","Type":"ContainerDied","Data":"bfb73b62fd19ecd260ff9de5e818be4539924e0f8e1f692d0bd699c0c50f1b9f"} Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.153267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d17-account-create-update-pjgn4" event={"ID":"83a08cdd-eca5-4352-bdb6-fa27c4c2c317","Type":"ContainerStarted","Data":"c800562d4e6fa50ac915c5e251bdf4af214da08cdb81a834604982f84aa36ab0"} Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.160197 4858 generic.go:334] "Generic (PLEG): container finished" podID="b28ac28e-619d-499c-bc7a-4baa5f06abe9" containerID="be1fcccf413fbaec45e43f5648772f93e33c872411abed6b2257725101eeded0" exitCode=0 Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.160296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-266f-account-create-update-dhqgj" event={"ID":"b28ac28e-619d-499c-bc7a-4baa5f06abe9","Type":"ContainerDied","Data":"be1fcccf413fbaec45e43f5648772f93e33c872411abed6b2257725101eeded0"} Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.164084 4858 generic.go:334] "Generic (PLEG): container finished" podID="e53dc11c-7183-4492-879b-ed0d2ca99c18" containerID="49ba9d55564eb329918f6d4ea4f3da881a2e3aed307cff1cbe6890d75ba10461" exitCode=0 Dec 05 14:14:17 crc kubenswrapper[4858]: I1205 14:14:17.164129 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wj5nl" event={"ID":"e53dc11c-7183-4492-879b-ed0d2ca99c18","Type":"ContainerDied","Data":"49ba9d55564eb329918f6d4ea4f3da881a2e3aed307cff1cbe6890d75ba10461"} Dec 05 14:14:17 crc 
kubenswrapper[4858]: I1205 14:14:17.164158 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wj5nl" event={"ID":"e53dc11c-7183-4492-879b-ed0d2ca99c18","Type":"ContainerStarted","Data":"15421ed7baf4e8696af2c8b57366e660ca75ccdfce2f1bbeb207b06537e8c36e"} Dec 05 14:14:21 crc kubenswrapper[4858]: I1205 14:14:21.755554 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:21 crc kubenswrapper[4858]: I1205 14:14:21.829597 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d44df849-7lnbz"] Dec 05 14:14:21 crc kubenswrapper[4858]: I1205 14:14:21.829862 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" podUID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerName="dnsmasq-dns" containerID="cri-o://33e79e2565bc959ee0475babaa2920a19d72a32d53368ecaab4ae32b7261aec5" gracePeriod=10 Dec 05 14:14:22 crc kubenswrapper[4858]: I1205 14:14:22.165121 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" podUID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Dec 05 14:14:23 crc kubenswrapper[4858]: I1205 14:14:23.215384 4858 generic.go:334] "Generic (PLEG): container finished" podID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerID="33e79e2565bc959ee0475babaa2920a19d72a32d53368ecaab4ae32b7261aec5" exitCode=0 Dec 05 14:14:23 crc kubenswrapper[4858]: I1205 14:14:23.215432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" event={"ID":"25eaa80d-4f7a-46ac-8f1a-4f497013d82f","Type":"ContainerDied","Data":"33e79e2565bc959ee0475babaa2920a19d72a32d53368ecaab4ae32b7261aec5"} Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.257895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5lwkq" event={"ID":"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c","Type":"ContainerDied","Data":"c6e4caeea250fca04b716af06871b7b0a99b4672ee8f63b7052f8b816fe2b6a7"} Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.258367 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e4caeea250fca04b716af06871b7b0a99b4672ee8f63b7052f8b816fe2b6a7" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.259489 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d17-account-create-update-pjgn4" event={"ID":"83a08cdd-eca5-4352-bdb6-fa27c4c2c317","Type":"ContainerDied","Data":"c800562d4e6fa50ac915c5e251bdf4af214da08cdb81a834604982f84aa36ab0"} Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.259517 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c800562d4e6fa50ac915c5e251bdf4af214da08cdb81a834604982f84aa36ab0" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.260989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-266f-account-create-update-dhqgj" event={"ID":"b28ac28e-619d-499c-bc7a-4baa5f06abe9","Type":"ContainerDied","Data":"ea6d61a24370b26c2c1ee23df0e729d22983303d7afe3fa9d93366d7490ddba1"} Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.261010 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea6d61a24370b26c2c1ee23df0e729d22983303d7afe3fa9d93366d7490ddba1" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 
14:14:25.262412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wj5nl" event={"ID":"e53dc11c-7183-4492-879b-ed0d2ca99c18","Type":"ContainerDied","Data":"15421ed7baf4e8696af2c8b57366e660ca75ccdfce2f1bbeb207b06537e8c36e"} Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.262432 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15421ed7baf4e8696af2c8b57366e660ca75ccdfce2f1bbeb207b06537e8c36e" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.300108 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.328426 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.331111 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.364620 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.373518 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.414603 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf46g\" (UniqueName: \"kubernetes.io/projected/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-kube-api-access-nf46g\") pod \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\" (UID: \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.414703 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-operator-scripts\") pod \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\" (UID: \"83a08cdd-eca5-4352-bdb6-fa27c4c2c317\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.416053 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83a08cdd-eca5-4352-bdb6-fa27c4c2c317" (UID: "83a08cdd-eca5-4352-bdb6-fa27c4c2c317"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.423071 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-kube-api-access-nf46g" (OuterVolumeSpecName: "kube-api-access-nf46g") pod "83a08cdd-eca5-4352-bdb6-fa27c4c2c317" (UID: "83a08cdd-eca5-4352-bdb6-fa27c4c2c317"). InnerVolumeSpecName "kube-api-access-nf46g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.516660 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9gr9\" (UniqueName: \"kubernetes.io/projected/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-kube-api-access-z9gr9\") pod \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.516714 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-sb\") pod \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.516759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b28ac28e-619d-499c-bc7a-4baa5f06abe9-operator-scripts\") pod \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\" (UID: \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.516862 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53dc11c-7183-4492-879b-ed0d2ca99c18-operator-scripts\") pod \"e53dc11c-7183-4492-879b-ed0d2ca99c18\" (UID: \"e53dc11c-7183-4492-879b-ed0d2ca99c18\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.516888 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-config\") pod \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.516916 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-nb\") pod \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.516947 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c69x\" (UniqueName: \"kubernetes.io/projected/e53dc11c-7183-4492-879b-ed0d2ca99c18-kube-api-access-2c69x\") pod \"e53dc11c-7183-4492-879b-ed0d2ca99c18\" (UID: \"e53dc11c-7183-4492-879b-ed0d2ca99c18\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.516996 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-operator-scripts\") pod \"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\" (UID: \"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.517030 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-dns-svc\") pod \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\" (UID: \"25eaa80d-4f7a-46ac-8f1a-4f497013d82f\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.517079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4wgs\" (UniqueName: \"kubernetes.io/projected/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-kube-api-access-s4wgs\") pod 
\"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\" (UID: \"50f8533f-a5fe-4af0-98db-eb1cc52e7b0c\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.517144 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qzxm\" (UniqueName: \"kubernetes.io/projected/b28ac28e-619d-499c-bc7a-4baa5f06abe9-kube-api-access-5qzxm\") pod \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\" (UID: \"b28ac28e-619d-499c-bc7a-4baa5f06abe9\") " Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.517816 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.517879 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf46g\" (UniqueName: \"kubernetes.io/projected/83a08cdd-eca5-4352-bdb6-fa27c4c2c317-kube-api-access-nf46g\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.517942 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b28ac28e-619d-499c-bc7a-4baa5f06abe9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b28ac28e-619d-499c-bc7a-4baa5f06abe9" (UID: "b28ac28e-619d-499c-bc7a-4baa5f06abe9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.518034 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50f8533f-a5fe-4af0-98db-eb1cc52e7b0c" (UID: "50f8533f-a5fe-4af0-98db-eb1cc52e7b0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.518483 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e53dc11c-7183-4492-879b-ed0d2ca99c18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e53dc11c-7183-4492-879b-ed0d2ca99c18" (UID: "e53dc11c-7183-4492-879b-ed0d2ca99c18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.519413 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-kube-api-access-z9gr9" (OuterVolumeSpecName: "kube-api-access-z9gr9") pod "25eaa80d-4f7a-46ac-8f1a-4f497013d82f" (UID: "25eaa80d-4f7a-46ac-8f1a-4f497013d82f"). InnerVolumeSpecName "kube-api-access-z9gr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.522274 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28ac28e-619d-499c-bc7a-4baa5f06abe9-kube-api-access-5qzxm" (OuterVolumeSpecName: "kube-api-access-5qzxm") pod "b28ac28e-619d-499c-bc7a-4baa5f06abe9" (UID: "b28ac28e-619d-499c-bc7a-4baa5f06abe9"). InnerVolumeSpecName "kube-api-access-5qzxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.523014 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53dc11c-7183-4492-879b-ed0d2ca99c18-kube-api-access-2c69x" (OuterVolumeSpecName: "kube-api-access-2c69x") pod "e53dc11c-7183-4492-879b-ed0d2ca99c18" (UID: "e53dc11c-7183-4492-879b-ed0d2ca99c18"). InnerVolumeSpecName "kube-api-access-2c69x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.531810 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-kube-api-access-s4wgs" (OuterVolumeSpecName: "kube-api-access-s4wgs") pod "50f8533f-a5fe-4af0-98db-eb1cc52e7b0c" (UID: "50f8533f-a5fe-4af0-98db-eb1cc52e7b0c"). InnerVolumeSpecName "kube-api-access-s4wgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.555393 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25eaa80d-4f7a-46ac-8f1a-4f497013d82f" (UID: "25eaa80d-4f7a-46ac-8f1a-4f497013d82f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.556254 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-config" (OuterVolumeSpecName: "config") pod "25eaa80d-4f7a-46ac-8f1a-4f497013d82f" (UID: "25eaa80d-4f7a-46ac-8f1a-4f497013d82f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.562493 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25eaa80d-4f7a-46ac-8f1a-4f497013d82f" (UID: "25eaa80d-4f7a-46ac-8f1a-4f497013d82f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.563327 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25eaa80d-4f7a-46ac-8f1a-4f497013d82f" (UID: "25eaa80d-4f7a-46ac-8f1a-4f497013d82f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618856 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53dc11c-7183-4492-879b-ed0d2ca99c18-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618889 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618900 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618909 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c69x\" (UniqueName: \"kubernetes.io/projected/e53dc11c-7183-4492-879b-ed0d2ca99c18-kube-api-access-2c69x\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618921 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618928 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618936 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4wgs\" (UniqueName: \"kubernetes.io/projected/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c-kube-api-access-s4wgs\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618944 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qzxm\" (UniqueName: \"kubernetes.io/projected/b28ac28e-619d-499c-bc7a-4baa5f06abe9-kube-api-access-5qzxm\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618953 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9gr9\" (UniqueName: \"kubernetes.io/projected/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-kube-api-access-z9gr9\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618961 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25eaa80d-4f7a-46ac-8f1a-4f497013d82f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.618969 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b28ac28e-619d-499c-bc7a-4baa5f06abe9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:25 crc kubenswrapper[4858]: I1205 14:14:25.947817 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gtl95" Dec 05 14:14:26 crc kubenswrapper[4858]: E1205 14:14:26.262346 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd99fd616_b195_4da7_b7ac_99bed8479e36.slice/crio-conmon-08ffecd9cc7a71d82d3e6577739e4a4afe4fee77374116ce3b8137d81627385f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd99fd616_b195_4da7_b7ac_99bed8479e36.slice/crio-08ffecd9cc7a71d82d3e6577739e4a4afe4fee77374116ce3b8137d81627385f.scope\": RecentStats: unable to find data in memory cache]" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.271480 4858 generic.go:334] "Generic (PLEG): container finished" podID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerID="08ffecd9cc7a71d82d3e6577739e4a4afe4fee77374116ce3b8137d81627385f" exitCode=0 Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.271542 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d99fd616-b195-4da7-b7ac-99bed8479e36","Type":"ContainerDied","Data":"08ffecd9cc7a71d82d3e6577739e4a4afe4fee77374116ce3b8137d81627385f"} Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.273433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl2mg" event={"ID":"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3","Type":"ContainerStarted","Data":"983b4227b1a3b4fa005273f71c6fad6c6a4ca2710332e045017479be3969dacc"} Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.277688 4858 generic.go:334] "Generic (PLEG): container finished" podID="96d65651-be4c-475d-b4dc-293f42b30e39" containerID="61be820f5d8a6be7f6e3cb724ea744ed88d63cbcb4c7adb651339c6612a8ed84" exitCode=0 Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.277769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96d65651-be4c-475d-b4dc-293f42b30e39","Type":"ContainerDied","Data":"61be820f5d8a6be7f6e3cb724ea744ed88d63cbcb4c7adb651339c6612a8ed84"} Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.290318 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d17-account-create-update-pjgn4" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.290385 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" event={"ID":"25eaa80d-4f7a-46ac-8f1a-4f497013d82f","Type":"ContainerDied","Data":"ec38a8224d680b3e5a709a103466f54469f60ac5687e5b26c2bd44d96a27e5f3"} Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.290440 4858 scope.go:117] "RemoveContainer" containerID="33e79e2565bc959ee0475babaa2920a19d72a32d53368ecaab4ae32b7261aec5" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.290566 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78d44df849-7lnbz" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.291059 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5lwkq" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.291574 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wj5nl" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.292091 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-266f-account-create-update-dhqgj" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.346974 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cl2mg" podStartSLOduration=3.152113988 podStartE2EDuration="16.34695497s" podCreationTimestamp="2025-12-05 14:14:10 +0000 UTC" firstStartedPulling="2025-12-05 14:14:11.949954056 +0000 UTC m=+1060.497552195" lastFinishedPulling="2025-12-05 14:14:25.144795038 +0000 UTC m=+1073.692393177" observedRunningTime="2025-12-05 14:14:26.336454976 +0000 UTC m=+1074.884053115" watchObservedRunningTime="2025-12-05 14:14:26.34695497 +0000 UTC m=+1074.894553109" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.464016 4858 scope.go:117] "RemoveContainer" containerID="a8afbf9221979e13deb7a2c81c55edd5a1d5550f4fa8bd7832e731a644550976" Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.493691 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78d44df849-7lnbz"] Dec 05 14:14:26 crc kubenswrapper[4858]: I1205 14:14:26.499873 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78d44df849-7lnbz"] Dec 05 14:14:27 crc kubenswrapper[4858]: I1205 14:14:27.299749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96d65651-be4c-475d-b4dc-293f42b30e39","Type":"ContainerStarted","Data":"b4f462209706ad933d22eba13ce317a196e3b5fa6757b7b067b49668ecaac734"} Dec 05 14:14:27 crc kubenswrapper[4858]: I1205 14:14:27.301048 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:14:27 crc kubenswrapper[4858]: I1205 14:14:27.303339 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d99fd616-b195-4da7-b7ac-99bed8479e36","Type":"ContainerStarted","Data":"ba74b79e23b66a2518665a5b2a045ada00324e6a4f063010cb7b6f8eb0d76203"} Dec 05 14:14:27 crc kubenswrapper[4858]: I1205 14:14:27.303775 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 14:14:27 crc kubenswrapper[4858]: I1205 14:14:27.335860 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.760185859 podStartE2EDuration="1m27.335815558s" podCreationTimestamp="2025-12-05 14:13:00 +0000 UTC" firstStartedPulling="2025-12-05 14:13:21.258890326 +0000 UTC m=+1009.806488465" lastFinishedPulling="2025-12-05 14:13:52.834520025 +0000 UTC m=+1041.382118164" observedRunningTime="2025-12-05 14:14:27.326278351 +0000 UTC m=+1075.873876510" watchObservedRunningTime="2025-12-05 14:14:27.335815558 +0000 UTC m=+1075.883413717" Dec 05 14:14:27 crc kubenswrapper[4858]: I1205 14:14:27.369704 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=56.106108959 podStartE2EDuration="1m27.369678363s" podCreationTimestamp="2025-12-05 14:13:00 +0000 UTC" firstStartedPulling="2025-12-05 14:13:21.447469563 +0000 UTC m=+1009.995067702" lastFinishedPulling="2025-12-05 14:13:52.711038967 +0000 UTC m=+1041.258637106" observedRunningTime="2025-12-05 14:14:27.36364141 +0000 UTC m=+1075.911239549" watchObservedRunningTime="2025-12-05 14:14:27.369678363 +0000 UTC m=+1075.917276502" Dec 05 14:14:27 crc kubenswrapper[4858]: I1205 14:14:27.912248 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" path="/var/lib/kubelet/pods/25eaa80d-4f7a-46ac-8f1a-4f497013d82f/volumes" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.398971 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.427790 4858 generic.go:334] "Generic (PLEG): container finished" podID="48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" containerID="983b4227b1a3b4fa005273f71c6fad6c6a4ca2710332e045017479be3969dacc" exitCode=0 Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.427851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl2mg" event={"ID":"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3","Type":"ContainerDied","Data":"983b4227b1a3b4fa005273f71c6fad6c6a4ca2710332e045017479be3969dacc"} Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.822512 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.893999 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mghrf"] Dec 05 14:14:41 crc kubenswrapper[4858]: E1205 14:14:41.894321 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53dc11c-7183-4492-879b-ed0d2ca99c18" containerName="mariadb-database-create" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894338 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53dc11c-7183-4492-879b-ed0d2ca99c18" containerName="mariadb-database-create" Dec 05 14:14:41 crc kubenswrapper[4858]: E1205 14:14:41.894364 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f8533f-a5fe-4af0-98db-eb1cc52e7b0c" containerName="mariadb-database-create" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894371 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f8533f-a5fe-4af0-98db-eb1cc52e7b0c" containerName="mariadb-database-create" Dec 05 14:14:41 crc kubenswrapper[4858]: E1205 14:14:41.894383 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28ac28e-619d-499c-bc7a-4baa5f06abe9" containerName="mariadb-account-create-update" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894389 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28ac28e-619d-499c-bc7a-4baa5f06abe9" containerName="mariadb-account-create-update" Dec 05 14:14:41 crc kubenswrapper[4858]: E1205 14:14:41.894410 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerName="init" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894415 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerName="init" Dec 05 14:14:41 crc kubenswrapper[4858]: E1205 14:14:41.894425 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerName="dnsmasq-dns" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894430 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerName="dnsmasq-dns" Dec 05 14:14:41 crc kubenswrapper[4858]: E1205 14:14:41.894442 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a08cdd-eca5-4352-bdb6-fa27c4c2c317" containerName="mariadb-account-create-update" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894450 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="83a08cdd-eca5-4352-bdb6-fa27c4c2c317" containerName="mariadb-account-create-update" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894584 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f8533f-a5fe-4af0-98db-eb1cc52e7b0c" containerName="mariadb-database-create" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894605 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="83a08cdd-eca5-4352-bdb6-fa27c4c2c317" containerName="mariadb-account-create-update" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894619 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28ac28e-619d-499c-bc7a-4baa5f06abe9" containerName="mariadb-account-create-update" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894629 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53dc11c-7183-4492-879b-ed0d2ca99c18" containerName="mariadb-database-create" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.894636 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="25eaa80d-4f7a-46ac-8f1a-4f497013d82f" containerName="dnsmasq-dns" Dec 05 14:14:41 crc kubenswrapper[4858]: I1205 14:14:41.895161 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.034780 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ef5601-b86a-456e-bad7-e713c17fa711-operator-scripts\") pod \"cinder-db-create-mghrf\" (UID: \"85ef5601-b86a-456e-bad7-e713c17fa711\") " pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.035326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6k27\" (UniqueName: \"kubernetes.io/projected/85ef5601-b86a-456e-bad7-e713c17fa711-kube-api-access-q6k27\") pod \"cinder-db-create-mghrf\" (UID: \"85ef5601-b86a-456e-bad7-e713c17fa711\") " pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.082925 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mghrf"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.099430 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-446f-account-create-update-tmxrf"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.100850 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.104481 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.118807 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-446f-account-create-update-tmxrf"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.144942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ef5601-b86a-456e-bad7-e713c17fa711-operator-scripts\") pod \"cinder-db-create-mghrf\" (UID: \"85ef5601-b86a-456e-bad7-e713c17fa711\") " pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.145072 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6k27\" (UniqueName: \"kubernetes.io/projected/85ef5601-b86a-456e-bad7-e713c17fa711-kube-api-access-q6k27\") pod \"cinder-db-create-mghrf\" (UID: \"85ef5601-b86a-456e-bad7-e713c17fa711\") " pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.146184 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ef5601-b86a-456e-bad7-e713c17fa711-operator-scripts\") pod \"cinder-db-create-mghrf\" (UID: \"85ef5601-b86a-456e-bad7-e713c17fa711\") " pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.177175 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-qh9gh"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.178280 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.181925 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-dfcd-account-create-update-5t722"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.183129 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.190930 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.209547 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qh9gh"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.234588 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dfcd-account-create-update-5t722"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.254843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6k27\" (UniqueName: \"kubernetes.io/projected/85ef5601-b86a-456e-bad7-e713c17fa711-kube-api-access-q6k27\") pod \"cinder-db-create-mghrf\" (UID: \"85ef5601-b86a-456e-bad7-e713c17fa711\") " pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.255412 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbb7f6b-3583-45c9-bac1-08b968e84700-operator-scripts\") pod \"cinder-dfcd-account-create-update-5t722\" (UID: \"6fbb7f6b-3583-45c9-bac1-08b968e84700\") " pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.255452 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmgdw\" (UniqueName: \"kubernetes.io/projected/6fbb7f6b-3583-45c9-bac1-08b968e84700-kube-api-access-qmgdw\") pod \"cinder-dfcd-account-create-update-5t722\" (UID: \"6fbb7f6b-3583-45c9-bac1-08b968e84700\") " pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.255496 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768a8643-81f7-42cf-a720-3e5daed8bba6-operator-scripts\") pod \"barbican-db-create-qh9gh\" (UID: \"768a8643-81f7-42cf-a720-3e5daed8bba6\") " pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.255616 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j49xn\" (UniqueName: \"kubernetes.io/projected/768a8643-81f7-42cf-a720-3e5daed8bba6-kube-api-access-j49xn\") pod \"barbican-db-create-qh9gh\" (UID: \"768a8643-81f7-42cf-a720-3e5daed8bba6\") " pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.255647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b28c893-a052-4412-8f85-112a1cd06861-operator-scripts\") pod \"barbican-446f-account-create-update-tmxrf\" (UID: \"5b28c893-a052-4412-8f85-112a1cd06861\") " pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.255668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcdmz\" (UniqueName: \"kubernetes.io/projected/5b28c893-a052-4412-8f85-112a1cd06861-kube-api-access-xcdmz\") pod \"barbican-446f-account-create-update-tmxrf\" (UID: \"5b28c893-a052-4412-8f85-112a1cd06861\") " pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 
14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.356719 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j49xn\" (UniqueName: \"kubernetes.io/projected/768a8643-81f7-42cf-a720-3e5daed8bba6-kube-api-access-j49xn\") pod \"barbican-db-create-qh9gh\" (UID: \"768a8643-81f7-42cf-a720-3e5daed8bba6\") " pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.357049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b28c893-a052-4412-8f85-112a1cd06861-operator-scripts\") pod \"barbican-446f-account-create-update-tmxrf\" (UID: \"5b28c893-a052-4412-8f85-112a1cd06861\") " pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.357079 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcdmz\" (UniqueName: \"kubernetes.io/projected/5b28c893-a052-4412-8f85-112a1cd06861-kube-api-access-xcdmz\") pod \"barbican-446f-account-create-update-tmxrf\" (UID: \"5b28c893-a052-4412-8f85-112a1cd06861\") " pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.357122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbb7f6b-3583-45c9-bac1-08b968e84700-operator-scripts\") pod \"cinder-dfcd-account-create-update-5t722\" (UID: \"6fbb7f6b-3583-45c9-bac1-08b968e84700\") " pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.357149 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmgdw\" (UniqueName: \"kubernetes.io/projected/6fbb7f6b-3583-45c9-bac1-08b968e84700-kube-api-access-qmgdw\") pod \"cinder-dfcd-account-create-update-5t722\" (UID: \"6fbb7f6b-3583-45c9-bac1-08b968e84700\") " pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.357186 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768a8643-81f7-42cf-a720-3e5daed8bba6-operator-scripts\") pod \"barbican-db-create-qh9gh\" (UID: \"768a8643-81f7-42cf-a720-3e5daed8bba6\") " pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.357892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b28c893-a052-4412-8f85-112a1cd06861-operator-scripts\") pod \"barbican-446f-account-create-update-tmxrf\" (UID: \"5b28c893-a052-4412-8f85-112a1cd06861\") " pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.358084 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbb7f6b-3583-45c9-bac1-08b968e84700-operator-scripts\") pod \"cinder-dfcd-account-create-update-5t722\" (UID: \"6fbb7f6b-3583-45c9-bac1-08b968e84700\") " pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.358125 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768a8643-81f7-42cf-a720-3e5daed8bba6-operator-scripts\") pod 
\"barbican-db-create-qh9gh\" (UID: \"768a8643-81f7-42cf-a720-3e5daed8bba6\") " pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.389584 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmgdw\" (UniqueName: \"kubernetes.io/projected/6fbb7f6b-3583-45c9-bac1-08b968e84700-kube-api-access-qmgdw\") pod \"cinder-dfcd-account-create-update-5t722\" (UID: \"6fbb7f6b-3583-45c9-bac1-08b968e84700\") " pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.390521 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j49xn\" (UniqueName: \"kubernetes.io/projected/768a8643-81f7-42cf-a720-3e5daed8bba6-kube-api-access-j49xn\") pod \"barbican-db-create-qh9gh\" (UID: \"768a8643-81f7-42cf-a720-3e5daed8bba6\") " pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.411436 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcdmz\" (UniqueName: \"kubernetes.io/projected/5b28c893-a052-4412-8f85-112a1cd06861-kube-api-access-xcdmz\") pod \"barbican-446f-account-create-update-tmxrf\" (UID: \"5b28c893-a052-4412-8f85-112a1cd06861\") " pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.416139 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.497168 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.510407 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.525497 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.557564 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-t4rpv"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.562773 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.579690 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-t4rpv"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.638885 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6n4wj"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.642970 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.652555 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6n4wj"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.660858 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.661411 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.662355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479f0846-6832-4c62-9791-cde613d23000-operator-scripts\") pod \"heat-db-create-t4rpv\" (UID: \"479f0846-6832-4c62-9791-cde613d23000\") " pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.662405 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcsl\" (UniqueName: \"kubernetes.io/projected/479f0846-6832-4c62-9791-cde613d23000-kube-api-access-jbcsl\") pod \"heat-db-create-t4rpv\" (UID: \"479f0846-6832-4c62-9791-cde613d23000\") " pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.664756 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbtl5" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.664852 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.763866 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrj7\" (UniqueName: \"kubernetes.io/projected/5f5aace7-7479-454e-b9c3-c83f492b0786-kube-api-access-dmrj7\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.764178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-config-data\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.764208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-combined-ca-bundle\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.764254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479f0846-6832-4c62-9791-cde613d23000-operator-scripts\") pod \"heat-db-create-t4rpv\" (UID: \"479f0846-6832-4c62-9791-cde613d23000\") " pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.764302 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcsl\" (UniqueName: \"kubernetes.io/projected/479f0846-6832-4c62-9791-cde613d23000-kube-api-access-jbcsl\") pod 
\"heat-db-create-t4rpv\" (UID: \"479f0846-6832-4c62-9791-cde613d23000\") " pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.767073 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479f0846-6832-4c62-9791-cde613d23000-operator-scripts\") pod \"heat-db-create-t4rpv\" (UID: \"479f0846-6832-4c62-9791-cde613d23000\") " pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.783743 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-84sxb"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.784758 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.810744 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-9d42-account-create-update-272c8"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.811982 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.825355 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.837993 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-84sxb"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.848364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcsl\" (UniqueName: \"kubernetes.io/projected/479f0846-6832-4c62-9791-cde613d23000-kube-api-access-jbcsl\") pod \"heat-db-create-t4rpv\" (UID: \"479f0846-6832-4c62-9791-cde613d23000\") " pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.867447 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-combined-ca-bundle\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.867536 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e3b03e-1157-4dfe-b594-57b16e70243a-operator-scripts\") pod \"neutron-db-create-84sxb\" (UID: \"62e3b03e-1157-4dfe-b594-57b16e70243a\") " pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.867569 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg4zs\" (UniqueName: \"kubernetes.io/projected/62e3b03e-1157-4dfe-b594-57b16e70243a-kube-api-access-hg4zs\") pod \"neutron-db-create-84sxb\" (UID: \"62e3b03e-1157-4dfe-b594-57b16e70243a\") " pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.867611 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmrj7\" (UniqueName: \"kubernetes.io/projected/5f5aace7-7479-454e-b9c3-c83f492b0786-kube-api-access-dmrj7\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.867677 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-config-data\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.876223 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-9d42-account-create-update-272c8"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.881457 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-config-data\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.887408 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-combined-ca-bundle\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.889757 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.898554 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmrj7\" (UniqueName: \"kubernetes.io/projected/5f5aace7-7479-454e-b9c3-c83f492b0786-kube-api-access-dmrj7\") pod \"keystone-db-sync-6n4wj\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.969044 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c598-account-create-update-7mdk8"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.970005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg4zs\" (UniqueName: \"kubernetes.io/projected/62e3b03e-1157-4dfe-b594-57b16e70243a-kube-api-access-hg4zs\") pod \"neutron-db-create-84sxb\" (UID: \"62e3b03e-1157-4dfe-b594-57b16e70243a\") " pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.970388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d277b\" (UniqueName: \"kubernetes.io/projected/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-kube-api-access-d277b\") pod \"heat-9d42-account-create-update-272c8\" (UID: \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\") " pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.970697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-operator-scripts\") pod \"heat-9d42-account-create-update-272c8\" (UID: \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\") " pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.971348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e3b03e-1157-4dfe-b594-57b16e70243a-operator-scripts\") pod \"neutron-db-create-84sxb\" (UID: \"62e3b03e-1157-4dfe-b594-57b16e70243a\") " 
pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.972170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e3b03e-1157-4dfe-b594-57b16e70243a-operator-scripts\") pod \"neutron-db-create-84sxb\" (UID: \"62e3b03e-1157-4dfe-b594-57b16e70243a\") " pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.975287 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.977861 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c598-account-create-update-7mdk8"] Dec 05 14:14:42 crc kubenswrapper[4858]: I1205 14:14:42.983490 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.006067 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.007249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg4zs\" (UniqueName: \"kubernetes.io/projected/62e3b03e-1157-4dfe-b594-57b16e70243a-kube-api-access-hg4zs\") pod \"neutron-db-create-84sxb\" (UID: \"62e3b03e-1157-4dfe-b594-57b16e70243a\") " pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.088250 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d0ff391-7201-49f7-be8b-21d096449ae7-operator-scripts\") pod \"neutron-c598-account-create-update-7mdk8\" (UID: \"7d0ff391-7201-49f7-be8b-21d096449ae7\") " pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.089214 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-operator-scripts\") pod \"heat-9d42-account-create-update-272c8\" (UID: \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\") " pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.089387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d277b\" (UniqueName: \"kubernetes.io/projected/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-kube-api-access-d277b\") pod \"heat-9d42-account-create-update-272c8\" (UID: \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\") " pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.089471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm98v\" (UniqueName: \"kubernetes.io/projected/7d0ff391-7201-49f7-be8b-21d096449ae7-kube-api-access-bm98v\") pod \"neutron-c598-account-create-update-7mdk8\" (UID: \"7d0ff391-7201-49f7-be8b-21d096449ae7\") " pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.090346 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-operator-scripts\") pod \"heat-9d42-account-create-update-272c8\" (UID: 
\"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\") " pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.118783 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d277b\" (UniqueName: \"kubernetes.io/projected/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-kube-api-access-d277b\") pod \"heat-9d42-account-create-update-272c8\" (UID: \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\") " pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.134103 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.158109 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.193941 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm98v\" (UniqueName: \"kubernetes.io/projected/7d0ff391-7201-49f7-be8b-21d096449ae7-kube-api-access-bm98v\") pod \"neutron-c598-account-create-update-7mdk8\" (UID: \"7d0ff391-7201-49f7-be8b-21d096449ae7\") " pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.194068 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d0ff391-7201-49f7-be8b-21d096449ae7-operator-scripts\") pod \"neutron-c598-account-create-update-7mdk8\" (UID: \"7d0ff391-7201-49f7-be8b-21d096449ae7\") " pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.194874 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d0ff391-7201-49f7-be8b-21d096449ae7-operator-scripts\") pod \"neutron-c598-account-create-update-7mdk8\" (UID: \"7d0ff391-7201-49f7-be8b-21d096449ae7\") " pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.222146 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm98v\" (UniqueName: \"kubernetes.io/projected/7d0ff391-7201-49f7-be8b-21d096449ae7-kube-api-access-bm98v\") pod \"neutron-c598-account-create-update-7mdk8\" (UID: \"7d0ff391-7201-49f7-be8b-21d096449ae7\") " pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.324504 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.356351 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-446f-account-create-update-tmxrf"] Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.476236 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.477402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cl2mg" event={"ID":"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3","Type":"ContainerDied","Data":"24cac268ecbaf045c5409fe966887ef070f70d44da17562e406d6aef6fef5bd0"} Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.477465 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24cac268ecbaf045c5409fe966887ef070f70d44da17562e406d6aef6fef5bd0" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.480041 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-446f-account-create-update-tmxrf" event={"ID":"5b28c893-a052-4412-8f85-112a1cd06861","Type":"ContainerStarted","Data":"4afde39ca630f90a70033e2f569400275a27ceca8b69c7fa5c4080d18b7a1590"} Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.501042 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mghrf"] Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.565887 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-qh9gh"] Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.571426 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-dfcd-account-create-update-5t722"] Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.615011 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-combined-ca-bundle\") pod \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.615385 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-config-data\") pod \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.615608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ctgk\" (UniqueName: \"kubernetes.io/projected/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-kube-api-access-5ctgk\") pod \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.615646 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-db-sync-config-data\") pod \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\" (UID: \"48dfcb42-ecb6-463d-9e5f-ddbf758dfee3\") " Dec 05 14:14:43 crc kubenswrapper[4858]: W1205 14:14:43.622051 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768a8643_81f7_42cf_a720_3e5daed8bba6.slice/crio-5d7db4231b1c4e2410bb6d3ee565de8fc616e38b29588b10828113bade355f04 WatchSource:0}: Error finding container 5d7db4231b1c4e2410bb6d3ee565de8fc616e38b29588b10828113bade355f04: Status 404 returned error can't find the container with id 5d7db4231b1c4e2410bb6d3ee565de8fc616e38b29588b10828113bade355f04 Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.625262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" (UID: "48dfcb42-ecb6-463d-9e5f-ddbf758dfee3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.641571 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-kube-api-access-5ctgk" (OuterVolumeSpecName: "kube-api-access-5ctgk") pod "48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" (UID: "48dfcb42-ecb6-463d-9e5f-ddbf758dfee3"). InnerVolumeSpecName "kube-api-access-5ctgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.717694 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ctgk\" (UniqueName: \"kubernetes.io/projected/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-kube-api-access-5ctgk\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.717719 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.724471 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" (UID: "48dfcb42-ecb6-463d-9e5f-ddbf758dfee3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.765856 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-config-data" (OuterVolumeSpecName: "config-data") pod "48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" (UID: "48dfcb42-ecb6-463d-9e5f-ddbf758dfee3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.807598 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-t4rpv"] Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.818819 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.818878 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.955450 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-9d42-account-create-update-272c8"] Dec 05 14:14:43 crc kubenswrapper[4858]: I1205 14:14:43.978548 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6n4wj"] Dec 05 14:14:43 crc kubenswrapper[4858]: W1205 14:14:43.978779 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f5aace7_7479_454e_b9c3_c83f492b0786.slice/crio-914734c23c01a67feec2a33540a37747eaa54e2b3aad717ff68ab084aa5dfa62 WatchSource:0}: Error finding container 914734c23c01a67feec2a33540a37747eaa54e2b3aad717ff68ab084aa5dfa62: Status 404 returned error can't find the container with id 914734c23c01a67feec2a33540a37747eaa54e2b3aad717ff68ab084aa5dfa62 Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.042284 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-84sxb"] Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.133337 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c598-account-create-update-7mdk8"] Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.494816 4858 generic.go:334] "Generic (PLEG): container finished" podID="479f0846-6832-4c62-9791-cde613d23000" containerID="4e96b9e2dfe266ec6a59dc053f704e0d89c44dcf982d6f93f4d2d06706c22626" exitCode=0 Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.494885 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-t4rpv" event={"ID":"479f0846-6832-4c62-9791-cde613d23000","Type":"ContainerDied","Data":"4e96b9e2dfe266ec6a59dc053f704e0d89c44dcf982d6f93f4d2d06706c22626"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.494910 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-t4rpv" event={"ID":"479f0846-6832-4c62-9791-cde613d23000","Type":"ContainerStarted","Data":"a2ae39040a1052c78551468e281b0d66e48ffff527c89b497b55f45c285192ca"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.507082 4858 generic.go:334] "Generic (PLEG): container finished" podID="85ef5601-b86a-456e-bad7-e713c17fa711" containerID="c9442e63f0c5957159579b6d4fffcb73ffdc6327bf09c7cc0559031c8d017720" exitCode=0 Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.507289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mghrf" event={"ID":"85ef5601-b86a-456e-bad7-e713c17fa711","Type":"ContainerDied","Data":"c9442e63f0c5957159579b6d4fffcb73ffdc6327bf09c7cc0559031c8d017720"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.507321 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mghrf" 
event={"ID":"85ef5601-b86a-456e-bad7-e713c17fa711","Type":"ContainerStarted","Data":"f0922f0f5e0ea56d44cde367be45dc5e78fae53b2fe1f3b8051dcbb11951cb03"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.509802 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c598-account-create-update-7mdk8" event={"ID":"7d0ff391-7201-49f7-be8b-21d096449ae7","Type":"ContainerStarted","Data":"f2d82c0a28dec3e8f4aba13fd38d2491d5d0f2be87c4c5e245d7c8225a6ba54d"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.534099 4858 generic.go:334] "Generic (PLEG): container finished" podID="6fbb7f6b-3583-45c9-bac1-08b968e84700" containerID="29150479d981ad9c9fe934fb2564200e1c4d615d1cbe5cba0b9a795894015b36" exitCode=0 Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.534399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dfcd-account-create-update-5t722" event={"ID":"6fbb7f6b-3583-45c9-bac1-08b968e84700","Type":"ContainerDied","Data":"29150479d981ad9c9fe934fb2564200e1c4d615d1cbe5cba0b9a795894015b36"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.534482 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dfcd-account-create-update-5t722" event={"ID":"6fbb7f6b-3583-45c9-bac1-08b968e84700","Type":"ContainerStarted","Data":"8a76554b28b0b8bee06f4728b7ee003d3c1f51131955c141f547d66f5a63e30d"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.561566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9d42-account-create-update-272c8" event={"ID":"be1c7cb2-81f8-483a-8abe-2c8f3968ad77","Type":"ContainerStarted","Data":"97326f483f5c296056b2089e182b32ff63bd7f519d9cf5bd90353684880af84d"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.561897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9d42-account-create-update-272c8" event={"ID":"be1c7cb2-81f8-483a-8abe-2c8f3968ad77","Type":"ContainerStarted","Data":"861fe720cd4d95ea13323329a373143a807786ee47de32d0e19a57f6d9785b62"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.568231 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6n4wj" event={"ID":"5f5aace7-7479-454e-b9c3-c83f492b0786","Type":"ContainerStarted","Data":"914734c23c01a67feec2a33540a37747eaa54e2b3aad717ff68ab084aa5dfa62"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.574850 4858 generic.go:334] "Generic (PLEG): container finished" podID="768a8643-81f7-42cf-a720-3e5daed8bba6" containerID="175cf61c7405767871eead7eb4f9c559d8721695dc1e54e9a9abc8d198c95d68" exitCode=0 Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.575089 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qh9gh" event={"ID":"768a8643-81f7-42cf-a720-3e5daed8bba6","Type":"ContainerDied","Data":"175cf61c7405767871eead7eb4f9c559d8721695dc1e54e9a9abc8d198c95d68"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.575719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qh9gh" event={"ID":"768a8643-81f7-42cf-a720-3e5daed8bba6","Type":"ContainerStarted","Data":"5d7db4231b1c4e2410bb6d3ee565de8fc616e38b29588b10828113bade355f04"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.576652 4858 generic.go:334] "Generic (PLEG): container finished" podID="5b28c893-a052-4412-8f85-112a1cd06861" containerID="d8b8f7b4376a7cca3edb0cfa4c554f05adf00af8e3560915d85f6f206307b004" exitCode=0 Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.576836 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-446f-account-create-update-tmxrf" event={"ID":"5b28c893-a052-4412-8f85-112a1cd06861","Type":"ContainerDied","Data":"d8b8f7b4376a7cca3edb0cfa4c554f05adf00af8e3560915d85f6f206307b004"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.578078 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cl2mg" Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.579208 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-84sxb" event={"ID":"62e3b03e-1157-4dfe-b594-57b16e70243a","Type":"ContainerStarted","Data":"eab931537e77eef25f737906aa0df423f1c7640efb1c1bebc51e9f3434001c75"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.586073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-84sxb" event={"ID":"62e3b03e-1157-4dfe-b594-57b16e70243a","Type":"ContainerStarted","Data":"f643e7bc328cd8d16b40dd700be6d8448d3d7b7db7554421cf7ad1acf93a1a51"} Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.711174 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-9d42-account-create-update-272c8" podStartSLOduration=2.711145206 podStartE2EDuration="2.711145206s" podCreationTimestamp="2025-12-05 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:14:44.664436373 +0000 UTC m=+1093.212034542" watchObservedRunningTime="2025-12-05 14:14:44.711145206 +0000 UTC m=+1093.258743345" Dec 05 14:14:44 crc kubenswrapper[4858]: I1205 14:14:44.724708 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-84sxb" podStartSLOduration=2.724687721 podStartE2EDuration="2.724687721s" podCreationTimestamp="2025-12-05 14:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:14:44.693348755 +0000 UTC m=+1093.240946894" watchObservedRunningTime="2025-12-05 14:14:44.724687721 +0000 UTC m=+1093.272285860" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.016241 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c64677f45-sx5vn"] Dec 05 14:14:45 crc kubenswrapper[4858]: E1205 14:14:45.016909 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" containerName="glance-db-sync" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.016981 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" containerName="glance-db-sync" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.017247 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" containerName="glance-db-sync" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.018225 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.040199 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c64677f45-sx5vn"] Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.158283 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-svc\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.158375 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-swift-storage-0\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.158422 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-config\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.158466 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-nb\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.158543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-sb\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.158570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9hfg\" (UniqueName: \"kubernetes.io/projected/322a7082-a7b1-4eed-a9b7-6ecad109cb76-kube-api-access-b9hfg\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.259769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-swift-storage-0\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.259845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-config\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.259887 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-nb\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.259942 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-sb\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.259959 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9hfg\" (UniqueName: \"kubernetes.io/projected/322a7082-a7b1-4eed-a9b7-6ecad109cb76-kube-api-access-b9hfg\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.259978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-svc\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.260915 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-svc\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.260959 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-sb\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.261360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-swift-storage-0\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.261579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-config\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.261774 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-nb\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.302139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9hfg\" (UniqueName: 
\"kubernetes.io/projected/322a7082-a7b1-4eed-a9b7-6ecad109cb76-kube-api-access-b9hfg\") pod \"dnsmasq-dns-5c64677f45-sx5vn\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.344989 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.588296 4858 generic.go:334] "Generic (PLEG): container finished" podID="7d0ff391-7201-49f7-be8b-21d096449ae7" containerID="155a2abcbc9e9cf802ea721aed75d24a1b579285aa9e9675635c2c00f6d2dc28" exitCode=0 Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.589387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c598-account-create-update-7mdk8" event={"ID":"7d0ff391-7201-49f7-be8b-21d096449ae7","Type":"ContainerDied","Data":"155a2abcbc9e9cf802ea721aed75d24a1b579285aa9e9675635c2c00f6d2dc28"} Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.592098 4858 generic.go:334] "Generic (PLEG): container finished" podID="be1c7cb2-81f8-483a-8abe-2c8f3968ad77" containerID="97326f483f5c296056b2089e182b32ff63bd7f519d9cf5bd90353684880af84d" exitCode=0 Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.592182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9d42-account-create-update-272c8" event={"ID":"be1c7cb2-81f8-483a-8abe-2c8f3968ad77","Type":"ContainerDied","Data":"97326f483f5c296056b2089e182b32ff63bd7f519d9cf5bd90353684880af84d"} Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.598398 4858 generic.go:334] "Generic (PLEG): container finished" podID="62e3b03e-1157-4dfe-b594-57b16e70243a" containerID="eab931537e77eef25f737906aa0df423f1c7640efb1c1bebc51e9f3434001c75" exitCode=0 Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.598603 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-84sxb" event={"ID":"62e3b03e-1157-4dfe-b594-57b16e70243a","Type":"ContainerDied","Data":"eab931537e77eef25f737906aa0df423f1c7640efb1c1bebc51e9f3434001c75"} Dec 05 14:14:45 crc kubenswrapper[4858]: I1205 14:14:45.824169 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c64677f45-sx5vn"] Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.244123 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.380555 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768a8643-81f7-42cf-a720-3e5daed8bba6-operator-scripts\") pod \"768a8643-81f7-42cf-a720-3e5daed8bba6\" (UID: \"768a8643-81f7-42cf-a720-3e5daed8bba6\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.381307 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j49xn\" (UniqueName: \"kubernetes.io/projected/768a8643-81f7-42cf-a720-3e5daed8bba6-kube-api-access-j49xn\") pod \"768a8643-81f7-42cf-a720-3e5daed8bba6\" (UID: \"768a8643-81f7-42cf-a720-3e5daed8bba6\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.381935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768a8643-81f7-42cf-a720-3e5daed8bba6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "768a8643-81f7-42cf-a720-3e5daed8bba6" (UID: "768a8643-81f7-42cf-a720-3e5daed8bba6"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.384873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768a8643-81f7-42cf-a720-3e5daed8bba6-kube-api-access-j49xn" (OuterVolumeSpecName: "kube-api-access-j49xn") pod "768a8643-81f7-42cf-a720-3e5daed8bba6" (UID: "768a8643-81f7-42cf-a720-3e5daed8bba6"). InnerVolumeSpecName "kube-api-access-j49xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.483330 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j49xn\" (UniqueName: \"kubernetes.io/projected/768a8643-81f7-42cf-a720-3e5daed8bba6-kube-api-access-j49xn\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.483360 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768a8643-81f7-42cf-a720-3e5daed8bba6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.529246 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.541622 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.561026 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.563960 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.586691 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479f0846-6832-4c62-9791-cde613d23000-operator-scripts\") pod \"479f0846-6832-4c62-9791-cde613d23000\" (UID: \"479f0846-6832-4c62-9791-cde613d23000\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.586764 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmgdw\" (UniqueName: \"kubernetes.io/projected/6fbb7f6b-3583-45c9-bac1-08b968e84700-kube-api-access-qmgdw\") pod \"6fbb7f6b-3583-45c9-bac1-08b968e84700\" (UID: \"6fbb7f6b-3583-45c9-bac1-08b968e84700\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.586874 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbb7f6b-3583-45c9-bac1-08b968e84700-operator-scripts\") pod \"6fbb7f6b-3583-45c9-bac1-08b968e84700\" (UID: \"6fbb7f6b-3583-45c9-bac1-08b968e84700\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.587010 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbcsl\" (UniqueName: \"kubernetes.io/projected/479f0846-6832-4c62-9791-cde613d23000-kube-api-access-jbcsl\") pod \"479f0846-6832-4c62-9791-cde613d23000\" (UID: \"479f0846-6832-4c62-9791-cde613d23000\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.590611 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbb7f6b-3583-45c9-bac1-08b968e84700-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fbb7f6b-3583-45c9-bac1-08b968e84700" (UID: "6fbb7f6b-3583-45c9-bac1-08b968e84700"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.590912 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/479f0846-6832-4c62-9791-cde613d23000-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "479f0846-6832-4c62-9791-cde613d23000" (UID: "479f0846-6832-4c62-9791-cde613d23000"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.591335 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/479f0846-6832-4c62-9791-cde613d23000-kube-api-access-jbcsl" (OuterVolumeSpecName: "kube-api-access-jbcsl") pod "479f0846-6832-4c62-9791-cde613d23000" (UID: "479f0846-6832-4c62-9791-cde613d23000"). InnerVolumeSpecName "kube-api-access-jbcsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.595839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbb7f6b-3583-45c9-bac1-08b968e84700-kube-api-access-qmgdw" (OuterVolumeSpecName: "kube-api-access-qmgdw") pod "6fbb7f6b-3583-45c9-bac1-08b968e84700" (UID: "6fbb7f6b-3583-45c9-bac1-08b968e84700"). InnerVolumeSpecName "kube-api-access-qmgdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.633563 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-t4rpv" event={"ID":"479f0846-6832-4c62-9791-cde613d23000","Type":"ContainerDied","Data":"a2ae39040a1052c78551468e281b0d66e48ffff527c89b497b55f45c285192ca"} Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.633644 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ae39040a1052c78551468e281b0d66e48ffff527c89b497b55f45c285192ca" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.633724 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-t4rpv" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.639042 4858 generic.go:334] "Generic (PLEG): container finished" podID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerID="2b9287f6435080d6b22e4707bcc15ab7726e55b6988908610fb4b91024c83666" exitCode=0 Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.639184 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" event={"ID":"322a7082-a7b1-4eed-a9b7-6ecad109cb76","Type":"ContainerDied","Data":"2b9287f6435080d6b22e4707bcc15ab7726e55b6988908610fb4b91024c83666"} Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.639230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" event={"ID":"322a7082-a7b1-4eed-a9b7-6ecad109cb76","Type":"ContainerStarted","Data":"fd3c58f16b46c393aaab478c47dcc69136a0822537daf9c0543ed9ee7a726105"} Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.643778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mghrf" event={"ID":"85ef5601-b86a-456e-bad7-e713c17fa711","Type":"ContainerDied","Data":"f0922f0f5e0ea56d44cde367be45dc5e78fae53b2fe1f3b8051dcbb11951cb03"} Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.643804 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0922f0f5e0ea56d44cde367be45dc5e78fae53b2fe1f3b8051dcbb11951cb03" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.643811 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mghrf" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.646157 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-dfcd-account-create-update-5t722" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.646757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-dfcd-account-create-update-5t722" event={"ID":"6fbb7f6b-3583-45c9-bac1-08b968e84700","Type":"ContainerDied","Data":"8a76554b28b0b8bee06f4728b7ee003d3c1f51131955c141f547d66f5a63e30d"} Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.646779 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a76554b28b0b8bee06f4728b7ee003d3c1f51131955c141f547d66f5a63e30d" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.672289 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-qh9gh" event={"ID":"768a8643-81f7-42cf-a720-3e5daed8bba6","Type":"ContainerDied","Data":"5d7db4231b1c4e2410bb6d3ee565de8fc616e38b29588b10828113bade355f04"} Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.672337 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d7db4231b1c4e2410bb6d3ee565de8fc616e38b29588b10828113bade355f04" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.672436 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-qh9gh" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.675630 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-446f-account-create-update-tmxrf" event={"ID":"5b28c893-a052-4412-8f85-112a1cd06861","Type":"ContainerDied","Data":"4afde39ca630f90a70033e2f569400275a27ceca8b69c7fa5c4080d18b7a1590"} Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.675670 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4afde39ca630f90a70033e2f569400275a27ceca8b69c7fa5c4080d18b7a1590" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.675737 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-446f-account-create-update-tmxrf" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.690534 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b28c893-a052-4412-8f85-112a1cd06861-operator-scripts\") pod \"5b28c893-a052-4412-8f85-112a1cd06861\" (UID: \"5b28c893-a052-4412-8f85-112a1cd06861\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.690793 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ef5601-b86a-456e-bad7-e713c17fa711-operator-scripts\") pod \"85ef5601-b86a-456e-bad7-e713c17fa711\" (UID: \"85ef5601-b86a-456e-bad7-e713c17fa711\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.691217 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcdmz\" (UniqueName: \"kubernetes.io/projected/5b28c893-a052-4412-8f85-112a1cd06861-kube-api-access-xcdmz\") pod \"5b28c893-a052-4412-8f85-112a1cd06861\" (UID: \"5b28c893-a052-4412-8f85-112a1cd06861\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.691364 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6k27\" (UniqueName: \"kubernetes.io/projected/85ef5601-b86a-456e-bad7-e713c17fa711-kube-api-access-q6k27\") pod \"85ef5601-b86a-456e-bad7-e713c17fa711\" (UID: \"85ef5601-b86a-456e-bad7-e713c17fa711\") " Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.695309 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbcsl\" (UniqueName: \"kubernetes.io/projected/479f0846-6832-4c62-9791-cde613d23000-kube-api-access-jbcsl\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.695482 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/479f0846-6832-4c62-9791-cde613d23000-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.695593 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmgdw\" (UniqueName: \"kubernetes.io/projected/6fbb7f6b-3583-45c9-bac1-08b968e84700-kube-api-access-qmgdw\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.695689 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fbb7f6b-3583-45c9-bac1-08b968e84700-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.691940 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b28c893-a052-4412-8f85-112a1cd06861-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b28c893-a052-4412-8f85-112a1cd06861" (UID: "5b28c893-a052-4412-8f85-112a1cd06861"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.692492 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ef5601-b86a-456e-bad7-e713c17fa711-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85ef5601-b86a-456e-bad7-e713c17fa711" (UID: "85ef5601-b86a-456e-bad7-e713c17fa711"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.696870 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b28c893-a052-4412-8f85-112a1cd06861-kube-api-access-xcdmz" (OuterVolumeSpecName: "kube-api-access-xcdmz") pod "5b28c893-a052-4412-8f85-112a1cd06861" (UID: "5b28c893-a052-4412-8f85-112a1cd06861"). InnerVolumeSpecName "kube-api-access-xcdmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.702265 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ef5601-b86a-456e-bad7-e713c17fa711-kube-api-access-q6k27" (OuterVolumeSpecName: "kube-api-access-q6k27") pod "85ef5601-b86a-456e-bad7-e713c17fa711" (UID: "85ef5601-b86a-456e-bad7-e713c17fa711"). InnerVolumeSpecName "kube-api-access-q6k27". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.800897 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6k27\" (UniqueName: \"kubernetes.io/projected/85ef5601-b86a-456e-bad7-e713c17fa711-kube-api-access-q6k27\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.800923 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b28c893-a052-4412-8f85-112a1cd06861-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.800932 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85ef5601-b86a-456e-bad7-e713c17fa711-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.800942 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcdmz\" (UniqueName: \"kubernetes.io/projected/5b28c893-a052-4412-8f85-112a1cd06861-kube-api-access-xcdmz\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:46 crc kubenswrapper[4858]: I1205 14:14:46.950888 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.003511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm98v\" (UniqueName: \"kubernetes.io/projected/7d0ff391-7201-49f7-be8b-21d096449ae7-kube-api-access-bm98v\") pod \"7d0ff391-7201-49f7-be8b-21d096449ae7\" (UID: \"7d0ff391-7201-49f7-be8b-21d096449ae7\") " Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.004367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d0ff391-7201-49f7-be8b-21d096449ae7-operator-scripts\") pod \"7d0ff391-7201-49f7-be8b-21d096449ae7\" (UID: \"7d0ff391-7201-49f7-be8b-21d096449ae7\") " Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.011071 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d0ff391-7201-49f7-be8b-21d096449ae7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d0ff391-7201-49f7-be8b-21d096449ae7" (UID: "7d0ff391-7201-49f7-be8b-21d096449ae7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.033990 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0ff391-7201-49f7-be8b-21d096449ae7-kube-api-access-bm98v" (OuterVolumeSpecName: "kube-api-access-bm98v") pod "7d0ff391-7201-49f7-be8b-21d096449ae7" (UID: "7d0ff391-7201-49f7-be8b-21d096449ae7"). InnerVolumeSpecName "kube-api-access-bm98v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:47 crc kubenswrapper[4858]: E1205 14:14:47.039553 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768a8643_81f7_42cf_a720_3e5daed8bba6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod479f0846_6832_4c62_9791_cde613d23000.slice/crio-a2ae39040a1052c78551468e281b0d66e48ffff527c89b497b55f45c285192ca\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fbb7f6b_3583_45c9_bac1_08b968e84700.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod479f0846_6832_4c62_9791_cde613d23000.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fbb7f6b_3583_45c9_bac1_08b968e84700.slice/crio-8a76554b28b0b8bee06f4728b7ee003d3c1f51131955c141f547d66f5a63e30d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b28c893_a052_4412_8f85_112a1cd06861.slice/crio-4afde39ca630f90a70033e2f569400275a27ceca8b69c7fa5c4080d18b7a1590\": RecentStats: unable to find data in memory cache]" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.081641 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.086480 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.106787 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d0ff391-7201-49f7-be8b-21d096449ae7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.106836 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm98v\" (UniqueName: \"kubernetes.io/projected/7d0ff391-7201-49f7-be8b-21d096449ae7-kube-api-access-bm98v\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.207739 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-operator-scripts\") pod \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\" (UID: \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\") " Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.207901 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg4zs\" (UniqueName: \"kubernetes.io/projected/62e3b03e-1157-4dfe-b594-57b16e70243a-kube-api-access-hg4zs\") pod \"62e3b03e-1157-4dfe-b594-57b16e70243a\" (UID: \"62e3b03e-1157-4dfe-b594-57b16e70243a\") " Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.207938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d277b\" (UniqueName: \"kubernetes.io/projected/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-kube-api-access-d277b\") pod \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\" (UID: \"be1c7cb2-81f8-483a-8abe-2c8f3968ad77\") " Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.208040 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e3b03e-1157-4dfe-b594-57b16e70243a-operator-scripts\") pod \"62e3b03e-1157-4dfe-b594-57b16e70243a\" (UID: \"62e3b03e-1157-4dfe-b594-57b16e70243a\") " Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.209022 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62e3b03e-1157-4dfe-b594-57b16e70243a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62e3b03e-1157-4dfe-b594-57b16e70243a" (UID: "62e3b03e-1157-4dfe-b594-57b16e70243a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.209440 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be1c7cb2-81f8-483a-8abe-2c8f3968ad77" (UID: "be1c7cb2-81f8-483a-8abe-2c8f3968ad77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.212792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e3b03e-1157-4dfe-b594-57b16e70243a-kube-api-access-hg4zs" (OuterVolumeSpecName: "kube-api-access-hg4zs") pod "62e3b03e-1157-4dfe-b594-57b16e70243a" (UID: "62e3b03e-1157-4dfe-b594-57b16e70243a"). InnerVolumeSpecName "kube-api-access-hg4zs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.213922 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-kube-api-access-d277b" (OuterVolumeSpecName: "kube-api-access-d277b") pod "be1c7cb2-81f8-483a-8abe-2c8f3968ad77" (UID: "be1c7cb2-81f8-483a-8abe-2c8f3968ad77"). InnerVolumeSpecName "kube-api-access-d277b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.310028 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg4zs\" (UniqueName: \"kubernetes.io/projected/62e3b03e-1157-4dfe-b594-57b16e70243a-kube-api-access-hg4zs\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.310069 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d277b\" (UniqueName: \"kubernetes.io/projected/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-kube-api-access-d277b\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.310081 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62e3b03e-1157-4dfe-b594-57b16e70243a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.310091 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be1c7cb2-81f8-483a-8abe-2c8f3968ad77-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.683309 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-9d42-account-create-update-272c8" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.683302 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-9d42-account-create-update-272c8" event={"ID":"be1c7cb2-81f8-483a-8abe-2c8f3968ad77","Type":"ContainerDied","Data":"861fe720cd4d95ea13323329a373143a807786ee47de32d0e19a57f6d9785b62"} Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.683386 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="861fe720cd4d95ea13323329a373143a807786ee47de32d0e19a57f6d9785b62" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.684795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-84sxb" event={"ID":"62e3b03e-1157-4dfe-b594-57b16e70243a","Type":"ContainerDied","Data":"f643e7bc328cd8d16b40dd700be6d8448d3d7b7db7554421cf7ad1acf93a1a51"} Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.684862 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-84sxb" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.684893 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f643e7bc328cd8d16b40dd700be6d8448d3d7b7db7554421cf7ad1acf93a1a51" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.686244 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" event={"ID":"322a7082-a7b1-4eed-a9b7-6ecad109cb76","Type":"ContainerStarted","Data":"831df3f785b1c9d6270168097a866bd797156c7bf8c05deed25fc1711304b623"} Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.686449 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.687859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c598-account-create-update-7mdk8" event={"ID":"7d0ff391-7201-49f7-be8b-21d096449ae7","Type":"ContainerDied","Data":"f2d82c0a28dec3e8f4aba13fd38d2491d5d0f2be87c4c5e245d7c8225a6ba54d"} Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.687892 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d82c0a28dec3e8f4aba13fd38d2491d5d0f2be87c4c5e245d7c8225a6ba54d" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.687908 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c598-account-create-update-7mdk8" Dec 05 14:14:47 crc kubenswrapper[4858]: I1205 14:14:47.725527 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" podStartSLOduration=3.725499881 podStartE2EDuration="3.725499881s" podCreationTimestamp="2025-12-05 14:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:14:47.713027625 +0000 UTC m=+1096.260625764" watchObservedRunningTime="2025-12-05 14:14:47.725499881 +0000 UTC m=+1096.273098020" Dec 05 14:14:52 crc kubenswrapper[4858]: I1205 14:14:52.727534 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6n4wj" event={"ID":"5f5aace7-7479-454e-b9c3-c83f492b0786","Type":"ContainerStarted","Data":"a367c902d2b57ae002427b5fe377ba1ca8489d79024410aee8b85b0e36323201"} Dec 05 14:14:52 crc kubenswrapper[4858]: I1205 14:14:52.748571 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6n4wj" podStartSLOduration=2.7710884140000003 podStartE2EDuration="10.748554202s" podCreationTimestamp="2025-12-05 14:14:42 +0000 UTC" firstStartedPulling="2025-12-05 14:14:43.996252479 +0000 UTC m=+1092.543850618" lastFinishedPulling="2025-12-05 14:14:51.973718267 +0000 UTC m=+1100.521316406" observedRunningTime="2025-12-05 14:14:52.746307991 +0000 UTC m=+1101.293906140" watchObservedRunningTime="2025-12-05 14:14:52.748554202 +0000 UTC m=+1101.296152341" Dec 05 14:14:55 crc kubenswrapper[4858]: I1205 14:14:55.346899 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:14:55 crc kubenswrapper[4858]: I1205 14:14:55.420248 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-768c5cd5f7-4pfv4"] Dec 05 14:14:55 crc kubenswrapper[4858]: I1205 14:14:55.420515 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" 
podUID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" containerName="dnsmasq-dns" containerID="cri-o://c437a28377e814bd2e0cd420838385ae30183cf8db086cc453bd6966683ee0cc" gracePeriod=10 Dec 05 14:14:55 crc kubenswrapper[4858]: I1205 14:14:55.758401 4858 generic.go:334] "Generic (PLEG): container finished" podID="5f5aace7-7479-454e-b9c3-c83f492b0786" containerID="a367c902d2b57ae002427b5fe377ba1ca8489d79024410aee8b85b0e36323201" exitCode=0 Dec 05 14:14:55 crc kubenswrapper[4858]: I1205 14:14:55.758613 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6n4wj" event={"ID":"5f5aace7-7479-454e-b9c3-c83f492b0786","Type":"ContainerDied","Data":"a367c902d2b57ae002427b5fe377ba1ca8489d79024410aee8b85b0e36323201"} Dec 05 14:14:55 crc kubenswrapper[4858]: I1205 14:14:55.765704 4858 generic.go:334] "Generic (PLEG): container finished" podID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" containerID="c437a28377e814bd2e0cd420838385ae30183cf8db086cc453bd6966683ee0cc" exitCode=0 Dec 05 14:14:55 crc kubenswrapper[4858]: I1205 14:14:55.765762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" event={"ID":"836ed005-6d52-439f-8a6c-8bdd848fbb4f","Type":"ContainerDied","Data":"c437a28377e814bd2e0cd420838385ae30183cf8db086cc453bd6966683ee0cc"} Dec 05 14:14:55 crc kubenswrapper[4858]: I1205 14:14:55.924337 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.056725 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-svc\") pod \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.056808 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjmgt\" (UniqueName: \"kubernetes.io/projected/836ed005-6d52-439f-8a6c-8bdd848fbb4f-kube-api-access-pjmgt\") pod \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.056921 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-config\") pod \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.056975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-swift-storage-0\") pod \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.057035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-nb\") pod \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.057076 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-sb\") pod 
\"836ed005-6d52-439f-8a6c-8bdd848fbb4f\" (UID: \"836ed005-6d52-439f-8a6c-8bdd848fbb4f\") " Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.077772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836ed005-6d52-439f-8a6c-8bdd848fbb4f-kube-api-access-pjmgt" (OuterVolumeSpecName: "kube-api-access-pjmgt") pod "836ed005-6d52-439f-8a6c-8bdd848fbb4f" (UID: "836ed005-6d52-439f-8a6c-8bdd848fbb4f"). InnerVolumeSpecName "kube-api-access-pjmgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.120358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-config" (OuterVolumeSpecName: "config") pod "836ed005-6d52-439f-8a6c-8bdd848fbb4f" (UID: "836ed005-6d52-439f-8a6c-8bdd848fbb4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.124353 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "836ed005-6d52-439f-8a6c-8bdd848fbb4f" (UID: "836ed005-6d52-439f-8a6c-8bdd848fbb4f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.134131 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "836ed005-6d52-439f-8a6c-8bdd848fbb4f" (UID: "836ed005-6d52-439f-8a6c-8bdd848fbb4f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.149729 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "836ed005-6d52-439f-8a6c-8bdd848fbb4f" (UID: "836ed005-6d52-439f-8a6c-8bdd848fbb4f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.155400 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "836ed005-6d52-439f-8a6c-8bdd848fbb4f" (UID: "836ed005-6d52-439f-8a6c-8bdd848fbb4f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.159360 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.159518 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.159610 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.159778 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjmgt\" (UniqueName: \"kubernetes.io/projected/836ed005-6d52-439f-8a6c-8bdd848fbb4f-kube-api-access-pjmgt\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.159907 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.160018 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/836ed005-6d52-439f-8a6c-8bdd848fbb4f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.775763 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.777127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-768c5cd5f7-4pfv4" event={"ID":"836ed005-6d52-439f-8a6c-8bdd848fbb4f","Type":"ContainerDied","Data":"192d68516bd8c9c87e5cc637f48e237e99222e93022e6ab1c003e8b5f5a354a5"} Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.777228 4858 scope.go:117] "RemoveContainer" containerID="c437a28377e814bd2e0cd420838385ae30183cf8db086cc453bd6966683ee0cc" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.820623 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-768c5cd5f7-4pfv4"] Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.821435 4858 scope.go:117] "RemoveContainer" containerID="f9eea525a3924e2e2df72be9bbbff99540f8d50df6eb70061584e4f0453a7996" Dec 05 14:14:56 crc kubenswrapper[4858]: I1205 14:14:56.829306 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-768c5cd5f7-4pfv4"] Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.134096 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.277573 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-config-data\") pod \"5f5aace7-7479-454e-b9c3-c83f492b0786\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.277946 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmrj7\" (UniqueName: \"kubernetes.io/projected/5f5aace7-7479-454e-b9c3-c83f492b0786-kube-api-access-dmrj7\") pod \"5f5aace7-7479-454e-b9c3-c83f492b0786\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.278077 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-combined-ca-bundle\") pod \"5f5aace7-7479-454e-b9c3-c83f492b0786\" (UID: \"5f5aace7-7479-454e-b9c3-c83f492b0786\") " Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.287096 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5aace7-7479-454e-b9c3-c83f492b0786-kube-api-access-dmrj7" (OuterVolumeSpecName: "kube-api-access-dmrj7") pod "5f5aace7-7479-454e-b9c3-c83f492b0786" (UID: "5f5aace7-7479-454e-b9c3-c83f492b0786"). InnerVolumeSpecName "kube-api-access-dmrj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.301371 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f5aace7-7479-454e-b9c3-c83f492b0786" (UID: "5f5aace7-7479-454e-b9c3-c83f492b0786"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.322809 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-config-data" (OuterVolumeSpecName: "config-data") pod "5f5aace7-7479-454e-b9c3-c83f492b0786" (UID: "5f5aace7-7479-454e-b9c3-c83f492b0786"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.379977 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.380015 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f5aace7-7479-454e-b9c3-c83f492b0786-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.380028 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmrj7\" (UniqueName: \"kubernetes.io/projected/5f5aace7-7479-454e-b9c3-c83f492b0786-kube-api-access-dmrj7\") on node \"crc\" DevicePath \"\"" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.799890 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6n4wj" event={"ID":"5f5aace7-7479-454e-b9c3-c83f492b0786","Type":"ContainerDied","Data":"914734c23c01a67feec2a33540a37747eaa54e2b3aad717ff68ab084aa5dfa62"} Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.799928 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="914734c23c01a67feec2a33540a37747eaa54e2b3aad717ff68ab084aa5dfa62" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.799934 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6n4wj" Dec 05 14:14:57 crc kubenswrapper[4858]: I1205 14:14:57.909901 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" path="/var/lib/kubelet/pods/836ed005-6d52-439f-8a6c-8bdd848fbb4f/volumes" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.081937 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6pt5l"] Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082294 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" containerName="init" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082311 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" containerName="init" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082320 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768a8643-81f7-42cf-a720-3e5daed8bba6" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082326 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="768a8643-81f7-42cf-a720-3e5daed8bba6" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082341 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0ff391-7201-49f7-be8b-21d096449ae7" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082349 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0ff391-7201-49f7-be8b-21d096449ae7" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082368 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5aace7-7479-454e-b9c3-c83f492b0786" containerName="keystone-db-sync" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082375 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5f5aace7-7479-454e-b9c3-c83f492b0786" containerName="keystone-db-sync" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082390 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" containerName="dnsmasq-dns" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082399 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" containerName="dnsmasq-dns" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082409 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbb7f6b-3583-45c9-bac1-08b968e84700" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082415 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbb7f6b-3583-45c9-bac1-08b968e84700" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082423 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be1c7cb2-81f8-483a-8abe-2c8f3968ad77" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082428 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="be1c7cb2-81f8-483a-8abe-2c8f3968ad77" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082442 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="479f0846-6832-4c62-9791-cde613d23000" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082448 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="479f0846-6832-4c62-9791-cde613d23000" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082464 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e3b03e-1157-4dfe-b594-57b16e70243a" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082471 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e3b03e-1157-4dfe-b594-57b16e70243a" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082481 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ef5601-b86a-456e-bad7-e713c17fa711" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082487 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ef5601-b86a-456e-bad7-e713c17fa711" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: E1205 14:14:58.082503 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b28c893-a052-4412-8f85-112a1cd06861" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082510 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b28c893-a052-4412-8f85-112a1cd06861" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082696 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0ff391-7201-49f7-be8b-21d096449ae7" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082710 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e3b03e-1157-4dfe-b594-57b16e70243a" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082721 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5b28c893-a052-4412-8f85-112a1cd06861" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082732 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ef5601-b86a-456e-bad7-e713c17fa711" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082743 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="768a8643-81f7-42cf-a720-3e5daed8bba6" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082755 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5aace7-7479-454e-b9c3-c83f492b0786" containerName="keystone-db-sync" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082768 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="836ed005-6d52-439f-8a6c-8bdd848fbb4f" containerName="dnsmasq-dns" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082778 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="479f0846-6832-4c62-9791-cde613d23000" containerName="mariadb-database-create" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082790 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="be1c7cb2-81f8-483a-8abe-2c8f3968ad77" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.082799 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbb7f6b-3583-45c9-bac1-08b968e84700" containerName="mariadb-account-create-update" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.083317 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.091023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-fernet-keys\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.091072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-config-data\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.091145 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-credential-keys\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.091180 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-scripts\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.091237 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8bwv\" (UniqueName: 
\"kubernetes.io/projected/da917591-312f-4f37-826f-3e565d811b1e-kube-api-access-v8bwv\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.091273 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-combined-ca-bundle\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.099630 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.100009 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.101148 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbtl5" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.101320 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.103264 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.122928 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d57f7778f-582x6"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.124364 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.144540 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6pt5l"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.156688 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d57f7778f-582x6"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8bwv\" (UniqueName: \"kubernetes.io/projected/da917591-312f-4f37-826f-3e565d811b1e-kube-api-access-v8bwv\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192379 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-svc\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192405 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-combined-ca-bundle\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192438 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8799\" (UniqueName: 
\"kubernetes.io/projected/1c343cd2-fe99-4faf-a907-179bc398516a-kube-api-access-q8799\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192520 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-fernet-keys\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192570 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-config-data\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-config\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192646 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-credential-keys\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192665 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-scripts\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.192680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-swift-storage-0\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.206983 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-fernet-keys\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.207669 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-config-data\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.212937 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-combined-ca-bundle\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.217045 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-credential-keys\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.234711 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-glkkv"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.235968 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.241782 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-scripts\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.243046 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-kkl74" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.243296 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.295867 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-config\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.295933 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-swift-storage-0\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.296019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-svc\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.296059 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8799\" (UniqueName: \"kubernetes.io/projected/1c343cd2-fe99-4faf-a907-179bc398516a-kube-api-access-q8799\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.296088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.296168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.297127 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.297721 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-config\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.298373 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-swift-storage-0\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.300814 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.305582 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-svc\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.322784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8bwv\" (UniqueName: \"kubernetes.io/projected/da917591-312f-4f37-826f-3e565d811b1e-kube-api-access-v8bwv\") pod \"keystone-bootstrap-6pt5l\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.385620 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8799\" 
(UniqueName: \"kubernetes.io/projected/1c343cd2-fe99-4faf-a907-179bc398516a-kube-api-access-q8799\") pod \"dnsmasq-dns-6d57f7778f-582x6\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.399972 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.402074 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-glkkv"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.415059 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-combined-ca-bundle\") pod \"heat-db-sync-glkkv\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.426883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-config-data\") pod \"heat-db-sync-glkkv\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.426911 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klfr6\" (UniqueName: \"kubernetes.io/projected/9be96efe-970b-4639-8744-3e63a0abfbd6-kube-api-access-klfr6\") pod \"heat-db-sync-glkkv\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.444171 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.500139 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-fbkbh"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.501246 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.506906 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fbkbh"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.508363 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kbgwq" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.509703 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.519891 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-fp96h"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.535842 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.536906 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-config-data\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.536937 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-db-sync-config-data\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.536960 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-etc-machine-id\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.536982 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f298f\" (UniqueName: \"kubernetes.io/projected/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-kube-api-access-f298f\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.537009 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-combined-ca-bundle\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.537034 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-scripts\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.537055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-combined-ca-bundle\") pod \"heat-db-sync-glkkv\" (UID: 
\"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.537103 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-config-data\") pod \"heat-db-sync-glkkv\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.537127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klfr6\" (UniqueName: \"kubernetes.io/projected/9be96efe-970b-4639-8744-3e63a0abfbd6-kube-api-access-klfr6\") pod \"heat-db-sync-glkkv\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.540189 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.566601 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-combined-ca-bundle\") pod \"heat-db-sync-glkkv\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.567532 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-config-data\") pod \"heat-db-sync-glkkv\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.579562 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-575f67464c-nsrld"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.580948 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7ptfj" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.581029 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.581161 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.590002 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.594883 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klfr6\" (UniqueName: \"kubernetes.io/projected/9be96efe-970b-4639-8744-3e63a0abfbd6-kube-api-access-klfr6\") pod \"heat-db-sync-glkkv\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.603266 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.603533 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.603732 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jhczg" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.604066 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.617285 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fp96h"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638424 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-scripts\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638492 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlp86\" (UniqueName: \"kubernetes.io/projected/f11e2282-12af-4a8d-8f16-eab320d07d4e-kube-api-access-zlp86\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-config-data\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638557 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-combined-ca-bundle\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638575 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-config\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-horizon-secret-key\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " 
pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638627 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-scripts\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638648 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbp84\" (UniqueName: \"kubernetes.io/projected/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-kube-api-access-rbp84\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638666 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-logs\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638692 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-config-data\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-db-sync-config-data\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638730 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-etc-machine-id\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638751 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f298f\" (UniqueName: \"kubernetes.io/projected/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-kube-api-access-f298f\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.638777 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-combined-ca-bundle\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.642065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-combined-ca-bundle\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.643010 4858 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-575f67464c-nsrld"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.651985 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-etc-machine-id\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.652477 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-glkkv" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.659262 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-scripts\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.662737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-config-data\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.670367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-db-sync-config-data\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.700383 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f298f\" (UniqueName: \"kubernetes.io/projected/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-kube-api-access-f298f\") pod \"cinder-db-sync-fbkbh\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.740282 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-combined-ca-bundle\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.740312 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-horizon-secret-key\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.740330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-config\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.740374 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-scripts\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " 
pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.740394 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbp84\" (UniqueName: \"kubernetes.io/projected/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-kube-api-access-rbp84\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.740409 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-logs\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.740482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlp86\" (UniqueName: \"kubernetes.io/projected/f11e2282-12af-4a8d-8f16-eab320d07d4e-kube-api-access-zlp86\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.740507 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-config-data\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.749098 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-config-data\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.749376 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-scripts\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.749639 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-logs\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.752376 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-combined-ca-bundle\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.758202 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-horizon-secret-key\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.767934 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-config\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.796755 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlp86\" (UniqueName: \"kubernetes.io/projected/f11e2282-12af-4a8d-8f16-eab320d07d4e-kube-api-access-zlp86\") pod \"neutron-db-sync-fp96h\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.820316 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.822767 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.836723 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.836905 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.837714 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.841694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.841757 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-log-httpd\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.841794 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g47j\" (UniqueName: \"kubernetes.io/projected/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-kube-api-access-9g47j\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.841907 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-config-data\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.841940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-run-httpd\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.841981 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-scripts\") pod \"ceilometer-0\" (UID: 
\"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.842008 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.873544 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbp84\" (UniqueName: \"kubernetes.io/projected/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-kube-api-access-rbp84\") pod \"horizon-575f67464c-nsrld\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.898968 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s8q57"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.900071 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.904341 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-75p2t" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.904661 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.904816 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.932626 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d57f7778f-582x6"] Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.942731 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-run-httpd\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.942963 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-scripts\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943074 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-logs\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943163 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-config-data\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943422 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrm8\" (UniqueName: \"kubernetes.io/projected/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-kube-api-access-mnrm8\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943515 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-log-httpd\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943597 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g47j\" (UniqueName: \"kubernetes.io/projected/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-kube-api-access-9g47j\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943691 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-combined-ca-bundle\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-scripts\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943892 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-config-data\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.945006 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-log-httpd\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.943513 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-run-httpd\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.956806 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:58 crc kubenswrapper[4858]: I1205 14:14:58.996439 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.002588 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-scripts\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.008868 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.009737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-config-data\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.014113 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s8q57"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.046581 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-config-data\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.046637 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrm8\" (UniqueName: \"kubernetes.io/projected/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-kube-api-access-mnrm8\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.046687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-combined-ca-bundle\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.046711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-scripts\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.046776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-logs\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.047665 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-logs\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.056302 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fp96h" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.057538 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-combined-ca-bundle\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.067645 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-config-data\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.068939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g47j\" (UniqueName: \"kubernetes.io/projected/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-kube-api-access-9g47j\") pod \"ceilometer-0\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " pod="openstack/ceilometer-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.072750 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-scripts\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.076477 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.080764 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69f889b9ff-thbrt"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.082855 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.127696 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrm8\" (UniqueName: \"kubernetes.io/projected/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-kube-api-access-mnrm8\") pod \"placement-db-sync-s8q57\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.145134 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.147227 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.153522 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-nb\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.153650 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-svc\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.153735 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-swift-storage-0\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.153848 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-sb\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.153892 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvnrg\" (UniqueName: \"kubernetes.io/projected/303565f4-49fa-4a41-9884-c801202229cb-kube-api-access-fvnrg\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.153917 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-config\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.172955 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.173331 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.176022 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.176257 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tfbpg" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.180875 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.192844 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.200188 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-5f99f"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.201330 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.206200 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-phngb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.206451 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.216263 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f889b9ff-thbrt"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.252188 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s8q57" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.264970 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5f99f"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.265873 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-sb\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.265905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvnrg\" (UniqueName: \"kubernetes.io/projected/303565f4-49fa-4a41-9884-c801202229cb-kube-api-access-fvnrg\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.265928 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-config\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.266005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-nb\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.266040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-svc\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.266071 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-swift-storage-0\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " 
pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.266954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-swift-storage-0\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.267461 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-sb\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.268280 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-config\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.268794 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-nb\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.269320 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-svc\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.287326 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvnrg\" (UniqueName: \"kubernetes.io/projected/303565f4-49fa-4a41-9884-c801202229cb-kube-api-access-fvnrg\") pod \"dnsmasq-dns-69f889b9ff-thbrt\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.298092 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59df675d85-2pvbb"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.299928 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.333336 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59df675d85-2pvbb"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372538 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-db-sync-config-data\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-combined-ca-bundle\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372719 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqrr6\" (UniqueName: \"kubernetes.io/projected/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-kube-api-access-rqrr6\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372739 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-logs\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372774 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbgp6\" (UniqueName: \"kubernetes.io/projected/945b1178-6672-45ba-bee9-335d1a2fec5c-kube-api-access-qbgp6\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372807 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372866 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372888 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-config-data\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372910 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-scripts\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.372933 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.424198 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477571 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txwkb\" (UniqueName: \"kubernetes.io/projected/c1257de8-8700-4326-9443-c10295c6ad73-kube-api-access-txwkb\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477631 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-config-data\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-scripts\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477686 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-combined-ca-bundle\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1257de8-8700-4326-9443-c10295c6ad73-horizon-secret-key\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477726 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqrr6\" (UniqueName: \"kubernetes.io/projected/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-kube-api-access-rqrr6\") pod 
\"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477741 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-logs\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbgp6\" (UniqueName: \"kubernetes.io/projected/945b1178-6672-45ba-bee9-335d1a2fec5c-kube-api-access-qbgp6\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477798 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477914 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477936 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-config-data\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477955 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-scripts\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477970 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.477990 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.478011 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1257de8-8700-4326-9443-c10295c6ad73-logs\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " 
pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.478051 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-db-sync-config-data\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.482987 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-db-sync-config-data\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.493033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.493493 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-logs\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.493881 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.503315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-scripts\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.503425 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.504296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.511367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-config-data\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.520363 4858 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/keystone-bootstrap-6pt5l"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.521728 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqrr6\" (UniqueName: \"kubernetes.io/projected/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-kube-api-access-rqrr6\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.522231 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbgp6\" (UniqueName: \"kubernetes.io/projected/945b1178-6672-45ba-bee9-335d1a2fec5c-kube-api-access-qbgp6\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.525429 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-combined-ca-bundle\") pod \"barbican-db-sync-5f99f\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.543635 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5f99f" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.581675 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1257de8-8700-4326-9443-c10295c6ad73-horizon-secret-key\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.581814 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1257de8-8700-4326-9443-c10295c6ad73-logs\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.581910 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txwkb\" (UniqueName: \"kubernetes.io/projected/c1257de8-8700-4326-9443-c10295c6ad73-kube-api-access-txwkb\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.581938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-config-data\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.581955 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-scripts\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.583025 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-scripts\") pod \"horizon-59df675d85-2pvbb\" (UID: 
\"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.586325 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1257de8-8700-4326-9443-c10295c6ad73-logs\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.591479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-config-data\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.594472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1257de8-8700-4326-9443-c10295c6ad73-horizon-secret-key\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.604689 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-glkkv"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.646842 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.648374 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.655064 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.655464 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.672794 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.675975 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.677662 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txwkb\" (UniqueName: \"kubernetes.io/projected/c1257de8-8700-4326-9443-c10295c6ad73-kube-api-access-txwkb\") pod \"horizon-59df675d85-2pvbb\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.796592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx7j7\" (UniqueName: \"kubernetes.io/projected/3bb2d017-3a44-4da1-9787-ba8e35d617de-kube-api-access-vx7j7\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.796651 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-logs\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.796680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.796708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.796771 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.796795 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.796838 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.796884 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.821352 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.883509 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pt5l" event={"ID":"da917591-312f-4f37-826f-3e565d811b1e","Type":"ContainerStarted","Data":"b4af1c3071a13b00a50556d1fc65ac34c40849950a05005f5677e5d853ef9014"} Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.901020 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-glkkv" event={"ID":"9be96efe-970b-4639-8744-3e63a0abfbd6","Type":"ContainerStarted","Data":"a63439eed6240ff963e2ab3c85f961f6598944d29253a424179a3f1f619e48d2"} Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.914283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.914335 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx7j7\" (UniqueName: \"kubernetes.io/projected/3bb2d017-3a44-4da1-9787-ba8e35d617de-kube-api-access-vx7j7\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.914369 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-logs\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.914395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.914426 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.914480 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.914501 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.914535 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.915337 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.915630 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.916672 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-logs\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.955809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.958554 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.959018 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.966435 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.971250 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx7j7\" (UniqueName: \"kubernetes.io/projected/3bb2d017-3a44-4da1-9787-ba8e35d617de-kube-api-access-vx7j7\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:14:59 crc kubenswrapper[4858]: I1205 14:14:59.981190 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.009022 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.048160 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-fbkbh"] Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.097666 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d57f7778f-582x6"] Dec 05 14:15:00 crc kubenswrapper[4858]: W1205 14:15:00.116698 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c343cd2_fe99_4faf_a907_179bc398516a.slice/crio-c12c722b0402f438e22e36478ff4959a50f4f84ed9b0beee7106a9f812e95d9f WatchSource:0}: Error finding container c12c722b0402f438e22e36478ff4959a50f4f84ed9b0beee7106a9f812e95d9f: Status 404 returned error can't find the container with id c12c722b0402f438e22e36478ff4959a50f4f84ed9b0beee7106a9f812e95d9f Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.168905 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.212507 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr"] Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.225984 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.243947 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.244214 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.273464 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr"] Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.299904 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-fp96h"] Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.327337 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072c3bf-87e4-4807-a14f-243c05c3e54d-config-volume\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.327401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzp4\" (UniqueName: \"kubernetes.io/projected/c072c3bf-87e4-4807-a14f-243c05c3e54d-kube-api-access-zzzp4\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:00 crc kubenswrapper[4858]: I1205 14:15:00.327437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c072c3bf-87e4-4807-a14f-243c05c3e54d-secret-volume\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:00 crc kubenswrapper[4858]: W1205 14:15:00.362627 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf11e2282_12af_4a8d_8f16_eab320d07d4e.slice/crio-5717408776b415176226d251b7c4c7e58edcfad7d8113969840ecfa0ace63871 WatchSource:0}: Error finding container 5717408776b415176226d251b7c4c7e58edcfad7d8113969840ecfa0ace63871: Status 404 returned error can't find the container with id 5717408776b415176226d251b7c4c7e58edcfad7d8113969840ecfa0ace63871 Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.430619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072c3bf-87e4-4807-a14f-243c05c3e54d-config-volume\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.430706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzp4\" (UniqueName: \"kubernetes.io/projected/c072c3bf-87e4-4807-a14f-243c05c3e54d-kube-api-access-zzzp4\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.430742 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c072c3bf-87e4-4807-a14f-243c05c3e54d-secret-volume\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.443067 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072c3bf-87e4-4807-a14f-243c05c3e54d-config-volume\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.446879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c072c3bf-87e4-4807-a14f-243c05c3e54d-secret-volume\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.486678 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzp4\" (UniqueName: \"kubernetes.io/projected/c072c3bf-87e4-4807-a14f-243c05c3e54d-kube-api-access-zzzp4\") pod \"collect-profiles-29415735-jrxnr\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.603906 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.721038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s8q57"] Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.875553 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-575f67464c-nsrld"] Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:00.916916 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:01.031975 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d57f7778f-582x6" event={"ID":"1c343cd2-fe99-4faf-a907-179bc398516a","Type":"ContainerStarted","Data":"7a48640985a44a48cb5cfc908a5aa337fa06580a467dfb94576e6c00f68ecd69"} Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:01.041725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d57f7778f-582x6" event={"ID":"1c343cd2-fe99-4faf-a907-179bc398516a","Type":"ContainerStarted","Data":"c12c722b0402f438e22e36478ff4959a50f4f84ed9b0beee7106a9f812e95d9f"} Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:01.061210 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s8q57" event={"ID":"9f8c113e-5e71-4e4f-a8c7-66caea8a6068","Type":"ContainerStarted","Data":"58b91776160cb12cd14d24f2515398b01230a7e687b2fd0ee7d52483aa74028f"} Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:01.080350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fbkbh" event={"ID":"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd","Type":"ContainerStarted","Data":"e15f6efc887bdc09c26b7231c0958e8a6aaa227d1ace67a45bba0ca27d8b3de0"} Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:01.095296 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fp96h" event={"ID":"f11e2282-12af-4a8d-8f16-eab320d07d4e","Type":"ContainerStarted","Data":"5717408776b415176226d251b7c4c7e58edcfad7d8113969840ecfa0ace63871"} Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:01.102495 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pt5l" event={"ID":"da917591-312f-4f37-826f-3e565d811b1e","Type":"ContainerStarted","Data":"dbb82e89de717b88543f98ac96946accb295f41533bf00e984ef5a1cc5feaabd"} Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:01.111988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bc8a2e-6170-4c4e-9289-ba46ae2768e8","Type":"ContainerStarted","Data":"626d8daf5fc92f24329df27ba269f8edc744f5d0d6c81d64b279c58b29bb4f38"} Dec 05 14:15:01 crc kubenswrapper[4858]: I1205 14:15:01.173287 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6pt5l" podStartSLOduration=3.173271219 podStartE2EDuration="3.173271219s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:01.171549222 +0000 UTC m=+1109.719147361" watchObservedRunningTime="2025-12-05 14:15:01.173271219 +0000 UTC m=+1109.720869408" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.172347 4858 generic.go:334] "Generic (PLEG): container finished" podID="1c343cd2-fe99-4faf-a907-179bc398516a" containerID="7a48640985a44a48cb5cfc908a5aa337fa06580a467dfb94576e6c00f68ecd69" exitCode=0 
Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.173041 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d57f7778f-582x6" event={"ID":"1c343cd2-fe99-4faf-a907-179bc398516a","Type":"ContainerDied","Data":"7a48640985a44a48cb5cfc908a5aa337fa06580a467dfb94576e6c00f68ecd69"} Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.188436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fp96h" event={"ID":"f11e2282-12af-4a8d-8f16-eab320d07d4e","Type":"ContainerStarted","Data":"e758f9573494956522352e0feafda2d1e9cfbd869deec084d8d4586f528c2e50"} Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.193358 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575f67464c-nsrld" event={"ID":"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c","Type":"ContainerStarted","Data":"f731d6b3456bcd2721be00ef7f6299283449ebbe48afe1bb26d1b8b34e27decb"} Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.216875 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:15:02 crc kubenswrapper[4858]: W1205 14:15:02.234693 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod303565f4_49fa_4a41_9884_c801202229cb.slice/crio-ffe253cdca960cffe0bc0694ab8a2da3492b95b282feaa908b8dce982654dfb7 WatchSource:0}: Error finding container ffe253cdca960cffe0bc0694ab8a2da3492b95b282feaa908b8dce982654dfb7: Status 404 returned error can't find the container with id ffe253cdca960cffe0bc0694ab8a2da3492b95b282feaa908b8dce982654dfb7 Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.251964 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f889b9ff-thbrt"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.312702 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-575f67464c-nsrld"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.378204 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-5f99f"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.443455 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d57f7778f-582x6" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.471857 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-fp96h" podStartSLOduration=4.471817472 podStartE2EDuration="4.471817472s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:02.284571432 +0000 UTC m=+1110.832169571" watchObservedRunningTime="2025-12-05 14:15:02.471817472 +0000 UTC m=+1111.019415611" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.503312 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dc46bfdbc-6gbs5"] Dec 05 14:15:02 crc kubenswrapper[4858]: E1205 14:15:02.504082 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c343cd2-fe99-4faf-a907-179bc398516a" containerName="init" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.504096 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c343cd2-fe99-4faf-a907-179bc398516a" containerName="init" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.504280 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c343cd2-fe99-4faf-a907-179bc398516a" containerName="init" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.505250 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.515162 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dc46bfdbc-6gbs5"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.523254 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531078 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-config\") pod \"1c343cd2-fe99-4faf-a907-179bc398516a\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531148 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-sb\") pod \"1c343cd2-fe99-4faf-a907-179bc398516a\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531210 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-svc\") pod \"1c343cd2-fe99-4faf-a907-179bc398516a\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531292 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8799\" (UniqueName: \"kubernetes.io/projected/1c343cd2-fe99-4faf-a907-179bc398516a-kube-api-access-q8799\") pod \"1c343cd2-fe99-4faf-a907-179bc398516a\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531310 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-nb\") pod 
\"1c343cd2-fe99-4faf-a907-179bc398516a\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531355 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-swift-storage-0\") pod \"1c343cd2-fe99-4faf-a907-179bc398516a\" (UID: \"1c343cd2-fe99-4faf-a907-179bc398516a\") " Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531575 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-config-data\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-scripts\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531640 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-logs\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531677 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxq9j\" (UniqueName: \"kubernetes.io/projected/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-kube-api-access-dxq9j\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.531861 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-horizon-secret-key\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.588539 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c343cd2-fe99-4faf-a907-179bc398516a-kube-api-access-q8799" (OuterVolumeSpecName: "kube-api-access-q8799") pod "1c343cd2-fe99-4faf-a907-179bc398516a" (UID: "1c343cd2-fe99-4faf-a907-179bc398516a"). InnerVolumeSpecName "kube-api-access-q8799". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.632973 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-config-data\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.633035 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-scripts\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.633055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-logs\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.633091 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxq9j\" (UniqueName: \"kubernetes.io/projected/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-kube-api-access-dxq9j\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.633132 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-horizon-secret-key\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.637581 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.638586 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8799\" (UniqueName: \"kubernetes.io/projected/1c343cd2-fe99-4faf-a907-179bc398516a-kube-api-access-q8799\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.638902 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-config" (OuterVolumeSpecName: "config") pod "1c343cd2-fe99-4faf-a907-179bc398516a" (UID: "1c343cd2-fe99-4faf-a907-179bc398516a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.639690 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-scripts\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.639883 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-config-data\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.641310 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-logs\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.656409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-horizon-secret-key\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.661436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c343cd2-fe99-4faf-a907-179bc398516a" (UID: "1c343cd2-fe99-4faf-a907-179bc398516a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.675724 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59df675d85-2pvbb"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.676148 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c343cd2-fe99-4faf-a907-179bc398516a" (UID: "1c343cd2-fe99-4faf-a907-179bc398516a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.677574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxq9j\" (UniqueName: \"kubernetes.io/projected/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-kube-api-access-dxq9j\") pod \"horizon-5dc46bfdbc-6gbs5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.719113 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c343cd2-fe99-4faf-a907-179bc398516a" (UID: "1c343cd2-fe99-4faf-a907-179bc398516a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.748410 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.748452 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.748469 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.755748 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c343cd2-fe99-4faf-a907-179bc398516a" (UID: "1c343cd2-fe99-4faf-a907-179bc398516a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.780278 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.781604 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.843689 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.857154 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.882422 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c343cd2-fe99-4faf-a907-179bc398516a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:02 crc kubenswrapper[4858]: W1205 14:15:02.940260 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f5f8181_a6e6_4ec0_854c_83cdeded5b16.slice/crio-0246ead91517ebd76557a418e39d8b4868d1a75e0e4c205b394c86fea6b00e70 WatchSource:0}: Error finding container 0246ead91517ebd76557a418e39d8b4868d1a75e0e4c205b394c86fea6b00e70: Status 404 returned error can't find the container with id 0246ead91517ebd76557a418e39d8b4868d1a75e0e4c205b394c86fea6b00e70 Dec 05 14:15:02 crc kubenswrapper[4858]: I1205 14:15:02.959763 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.217088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5f99f" event={"ID":"945b1178-6672-45ba-bee9-335d1a2fec5c","Type":"ContainerStarted","Data":"4df9e7f00d27d45ad81398e956714580d4421b7baa214207d1b858a8bac5d317"} Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.219385 4858 generic.go:334] "Generic (PLEG): container finished" podID="303565f4-49fa-4a41-9884-c801202229cb" containerID="ac84971d27886f81ae6af45f3b6c96072e8cdea8f5a28312299366d4aac1e083" exitCode=0 Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.219447 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" event={"ID":"303565f4-49fa-4a41-9884-c801202229cb","Type":"ContainerDied","Data":"ac84971d27886f81ae6af45f3b6c96072e8cdea8f5a28312299366d4aac1e083"} Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.219471 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" event={"ID":"303565f4-49fa-4a41-9884-c801202229cb","Type":"ContainerStarted","Data":"ffe253cdca960cffe0bc0694ab8a2da3492b95b282feaa908b8dce982654dfb7"} Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.225201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" event={"ID":"c072c3bf-87e4-4807-a14f-243c05c3e54d","Type":"ContainerStarted","Data":"20adf87e00a8052ba27956e6d02af25092a61387da6b4901469d98cc5d38f35a"} Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.229713 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59df675d85-2pvbb" event={"ID":"c1257de8-8700-4326-9443-c10295c6ad73","Type":"ContainerStarted","Data":"9f7090f8391dbbda9ceabaf9743cd9b5318e3e531d79d48c4a52d54f73311fd9"} Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.236522 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d57f7778f-582x6" event={"ID":"1c343cd2-fe99-4faf-a907-179bc398516a","Type":"ContainerDied","Data":"c12c722b0402f438e22e36478ff4959a50f4f84ed9b0beee7106a9f812e95d9f"} Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.236575 4858 scope.go:117] "RemoveContainer" containerID="7a48640985a44a48cb5cfc908a5aa337fa06580a467dfb94576e6c00f68ecd69" Dec 05 
Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.242110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f5f8181-a6e6-4ec0-854c-83cdeded5b16","Type":"ContainerStarted","Data":"0246ead91517ebd76557a418e39d8b4868d1a75e0e4c205b394c86fea6b00e70"}
Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.280402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bb2d017-3a44-4da1-9787-ba8e35d617de","Type":"ContainerStarted","Data":"fb1018266db75f8e2bb68366a0c965e3437a13602e38f76f603a19d5ce001d19"}
Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.388740 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d57f7778f-582x6"]
Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.406250 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d57f7778f-582x6"]
Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.688255 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dc46bfdbc-6gbs5"]
Dec 05 14:15:03 crc kubenswrapper[4858]: W1205 14:15:03.737639 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9789e7f_de7c_44a6_9a33_683b8f9d99c5.slice/crio-f028a93ae11f38054a000cf4f9d20d13588d21ca95e2d5f3355e0bd503a5777c WatchSource:0}: Error finding container f028a93ae11f38054a000cf4f9d20d13588d21ca95e2d5f3355e0bd503a5777c: Status 404 returned error can't find the container with id f028a93ae11f38054a000cf4f9d20d13588d21ca95e2d5f3355e0bd503a5777c
Dec 05 14:15:03 crc kubenswrapper[4858]: I1205 14:15:03.942716 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c343cd2-fe99-4faf-a907-179bc398516a" path="/var/lib/kubelet/pods/1c343cd2-fe99-4faf-a907-179bc398516a/volumes"
Dec 05 14:15:04 crc kubenswrapper[4858]: I1205 14:15:04.299380 4858 generic.go:334] "Generic (PLEG): container finished" podID="c072c3bf-87e4-4807-a14f-243c05c3e54d" containerID="b946863cbe80dada0fde3fb478d5d5df9bae80ae7d13100ee9c4fd0913141e58" exitCode=0
Dec 05 14:15:04 crc kubenswrapper[4858]: I1205 14:15:04.299469 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" event={"ID":"c072c3bf-87e4-4807-a14f-243c05c3e54d","Type":"ContainerDied","Data":"b946863cbe80dada0fde3fb478d5d5df9bae80ae7d13100ee9c4fd0913141e58"}
Dec 05 14:15:04 crc kubenswrapper[4858]: I1205 14:15:04.306751 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dc46bfdbc-6gbs5" event={"ID":"f9789e7f-de7c-44a6-9a33-683b8f9d99c5","Type":"ContainerStarted","Data":"f028a93ae11f38054a000cf4f9d20d13588d21ca95e2d5f3355e0bd503a5777c"}
Dec 05 14:15:04 crc kubenswrapper[4858]: I1205 14:15:04.330873 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" event={"ID":"303565f4-49fa-4a41-9884-c801202229cb","Type":"ContainerStarted","Data":"03b65ed9d886af294a30dc7ae883fd1c935d17612c5da3b40714a50a31b4c17d"}
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.353313 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f5f8181-a6e6-4ec0-854c-83cdeded5b16","Type":"ContainerStarted","Data":"1b60ba235413a3ee397a55319db8620711d6a778a0c8a39b674472064f477fb8"}
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.355438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bb2d017-3a44-4da1-9787-ba8e35d617de","Type":"ContainerStarted","Data":"43afeb5b342fa532cf0f80180310630f1f561012b6536cab24ac8aefdc972799"}
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.355539 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt"
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.388069 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" podStartSLOduration=7.388049777 podStartE2EDuration="7.388049777s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:05.381004797 +0000 UTC m=+1113.928602936" watchObservedRunningTime="2025-12-05 14:15:05.388049777 +0000 UTC m=+1113.935647916"
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.756436 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr"
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.841053 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c072c3bf-87e4-4807-a14f-243c05c3e54d-secret-volume\") pod \"c072c3bf-87e4-4807-a14f-243c05c3e54d\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") "
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.841250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072c3bf-87e4-4807-a14f-243c05c3e54d-config-volume\") pod \"c072c3bf-87e4-4807-a14f-243c05c3e54d\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") "
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.841852 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzzp4\" (UniqueName: \"kubernetes.io/projected/c072c3bf-87e4-4807-a14f-243c05c3e54d-kube-api-access-zzzp4\") pod \"c072c3bf-87e4-4807-a14f-243c05c3e54d\" (UID: \"c072c3bf-87e4-4807-a14f-243c05c3e54d\") "
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.850001 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c072c3bf-87e4-4807-a14f-243c05c3e54d-config-volume" (OuterVolumeSpecName: "config-volume") pod "c072c3bf-87e4-4807-a14f-243c05c3e54d" (UID: "c072c3bf-87e4-4807-a14f-243c05c3e54d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.852991 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c072c3bf-87e4-4807-a14f-243c05c3e54d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c072c3bf-87e4-4807-a14f-243c05c3e54d" (UID: "c072c3bf-87e4-4807-a14f-243c05c3e54d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.853367 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c072c3bf-87e4-4807-a14f-243c05c3e54d-kube-api-access-zzzp4" (OuterVolumeSpecName: "kube-api-access-zzzp4") pod "c072c3bf-87e4-4807-a14f-243c05c3e54d" (UID: "c072c3bf-87e4-4807-a14f-243c05c3e54d"). InnerVolumeSpecName "kube-api-access-zzzp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.944925 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzzp4\" (UniqueName: \"kubernetes.io/projected/c072c3bf-87e4-4807-a14f-243c05c3e54d-kube-api-access-zzzp4\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.944957 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c072c3bf-87e4-4807-a14f-243c05c3e54d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:05 crc kubenswrapper[4858]: I1205 14:15:05.944970 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c072c3bf-87e4-4807-a14f-243c05c3e54d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:06 crc kubenswrapper[4858]: I1205 14:15:06.368681 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" Dec 05 14:15:06 crc kubenswrapper[4858]: I1205 14:15:06.368903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr" event={"ID":"c072c3bf-87e4-4807-a14f-243c05c3e54d","Type":"ContainerDied","Data":"20adf87e00a8052ba27956e6d02af25092a61387da6b4901469d98cc5d38f35a"} Dec 05 14:15:06 crc kubenswrapper[4858]: I1205 14:15:06.369363 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20adf87e00a8052ba27956e6d02af25092a61387da6b4901469d98cc5d38f35a" Dec 05 14:15:07 crc kubenswrapper[4858]: I1205 14:15:07.409202 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f5f8181-a6e6-4ec0-854c-83cdeded5b16","Type":"ContainerStarted","Data":"c3e461878fbd93abae24f67d129a4382f236eb34c3978e2252f075016cdd5e3d"} Dec 05 14:15:07 crc kubenswrapper[4858]: I1205 14:15:07.410147 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerName="glance-log" containerID="cri-o://1b60ba235413a3ee397a55319db8620711d6a778a0c8a39b674472064f477fb8" gracePeriod=30 Dec 05 14:15:07 crc kubenswrapper[4858]: I1205 14:15:07.410749 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerName="glance-httpd" containerID="cri-o://c3e461878fbd93abae24f67d129a4382f236eb34c3978e2252f075016cdd5e3d" gracePeriod=30 Dec 05 14:15:07 crc kubenswrapper[4858]: I1205 14:15:07.442806 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.442785677 podStartE2EDuration="9.442785677s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-05 14:15:07.433222839 +0000 UTC m=+1115.980820968" watchObservedRunningTime="2025-12-05 14:15:07.442785677 +0000 UTC m=+1115.990383816" Dec 05 14:15:07 crc kubenswrapper[4858]: I1205 14:15:07.447618 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bb2d017-3a44-4da1-9787-ba8e35d617de","Type":"ContainerStarted","Data":"d13d83534d069dc01f3832d85fd130ce46732ef13817173f0dd8acd4052e6829"} Dec 05 14:15:07 crc kubenswrapper[4858]: I1205 14:15:07.447810 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerName="glance-log" containerID="cri-o://43afeb5b342fa532cf0f80180310630f1f561012b6536cab24ac8aefdc972799" gracePeriod=30 Dec 05 14:15:07 crc kubenswrapper[4858]: I1205 14:15:07.448248 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerName="glance-httpd" containerID="cri-o://d13d83534d069dc01f3832d85fd130ce46732ef13817173f0dd8acd4052e6829" gracePeriod=30 Dec 05 14:15:07 crc kubenswrapper[4858]: I1205 14:15:07.483740 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.483722741 podStartE2EDuration="9.483722741s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:07.480475684 +0000 UTC m=+1116.028073823" watchObservedRunningTime="2025-12-05 14:15:07.483722741 +0000 UTC m=+1116.031320870" Dec 05 14:15:07 crc kubenswrapper[4858]: E1205 14:15:07.559687 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bb2d017_3a44_4da1_9787_ba8e35d617de.slice/crio-43afeb5b342fa532cf0f80180310630f1f561012b6536cab24ac8aefdc972799.scope\": RecentStats: unable to find data in memory cache]" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.219280 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59df675d85-2pvbb"] Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.298326 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66fd8d549b-n87dk"] Dec 05 14:15:08 crc kubenswrapper[4858]: E1205 14:15:08.298778 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c072c3bf-87e4-4807-a14f-243c05c3e54d" containerName="collect-profiles" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.298792 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c072c3bf-87e4-4807-a14f-243c05c3e54d" containerName="collect-profiles" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.299278 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c072c3bf-87e4-4807-a14f-243c05c3e54d" containerName="collect-profiles" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.300301 4858 util.go:30] "No sandbox for pod can be found. 
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.307576 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.326416 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66fd8d549b-n87dk"]
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.401912 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4e91f9c-4d1e-4765-b609-32b5531066bf-logs\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.401967 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-scripts\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.401988 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-config-data\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.402010 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-combined-ca-bundle\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.402032 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzp89\" (UniqueName: \"kubernetes.io/projected/f4e91f9c-4d1e-4765-b609-32b5531066bf-kube-api-access-wzp89\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.402064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-tls-certs\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.402103 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-secret-key\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.421774 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dc46bfdbc-6gbs5"]
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.480125 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66fb787db8-jqwt8"]
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.482011 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.495189 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66fb787db8-jqwt8"]
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.506654 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4e91f9c-4d1e-4765-b609-32b5531066bf-logs\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.506713 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-scripts\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.506743 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-config-data\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.506766 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-combined-ca-bundle\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.506788 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzp89\" (UniqueName: \"kubernetes.io/projected/f4e91f9c-4d1e-4765-b609-32b5531066bf-kube-api-access-wzp89\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.506840 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-tls-certs\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.506881 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-secret-key\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.519419 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-secret-key\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.520607 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-config-data\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk"
pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.520894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4e91f9c-4d1e-4765-b609-32b5531066bf-logs\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.521542 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-scripts\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.528668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-combined-ca-bundle\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.550609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-tls-certs\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.595373 4858 generic.go:334] "Generic (PLEG): container finished" podID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerID="c3e461878fbd93abae24f67d129a4382f236eb34c3978e2252f075016cdd5e3d" exitCode=0 Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.595414 4858 generic.go:334] "Generic (PLEG): container finished" podID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerID="1b60ba235413a3ee397a55319db8620711d6a778a0c8a39b674472064f477fb8" exitCode=143 Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.595512 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f5f8181-a6e6-4ec0-854c-83cdeded5b16","Type":"ContainerDied","Data":"c3e461878fbd93abae24f67d129a4382f236eb34c3978e2252f075016cdd5e3d"} Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.595549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f5f8181-a6e6-4ec0-854c-83cdeded5b16","Type":"ContainerDied","Data":"1b60ba235413a3ee397a55319db8620711d6a778a0c8a39b674472064f477fb8"} Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.607227 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzp89\" (UniqueName: \"kubernetes.io/projected/f4e91f9c-4d1e-4765-b609-32b5531066bf-kube-api-access-wzp89\") pod \"horizon-66fd8d549b-n87dk\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.620450 4858 generic.go:334] "Generic (PLEG): container finished" podID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerID="d13d83534d069dc01f3832d85fd130ce46732ef13817173f0dd8acd4052e6829" exitCode=0 Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.620489 4858 generic.go:334] "Generic (PLEG): container finished" podID="3bb2d017-3a44-4da1-9787-ba8e35d617de" 
containerID="43afeb5b342fa532cf0f80180310630f1f561012b6536cab24ac8aefdc972799" exitCode=143 Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.620513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bb2d017-3a44-4da1-9787-ba8e35d617de","Type":"ContainerDied","Data":"d13d83534d069dc01f3832d85fd130ce46732ef13817173f0dd8acd4052e6829"} Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.620544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bb2d017-3a44-4da1-9787-ba8e35d617de","Type":"ContainerDied","Data":"43afeb5b342fa532cf0f80180310630f1f561012b6536cab24ac8aefdc972799"} Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.625500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-combined-ca-bundle\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.625851 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-horizon-tls-certs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.626055 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9929d39-1191-4732-a51f-16d2f973bf90-config-data\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.626459 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9929d39-1191-4732-a51f-16d2f973bf90-logs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.626557 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-horizon-secret-key\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.626623 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9929d39-1191-4732-a51f-16d2f973bf90-scripts\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.626852 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgbs\" (UniqueName: \"kubernetes.io/projected/f9929d39-1191-4732-a51f-16d2f973bf90-kube-api-access-cxgbs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.729084 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9929d39-1191-4732-a51f-16d2f973bf90-logs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.729382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-horizon-secret-key\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.729415 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9929d39-1191-4732-a51f-16d2f973bf90-scripts\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.729467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgbs\" (UniqueName: \"kubernetes.io/projected/f9929d39-1191-4732-a51f-16d2f973bf90-kube-api-access-cxgbs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.729516 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-combined-ca-bundle\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.729571 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-horizon-tls-certs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.729599 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9929d39-1191-4732-a51f-16d2f973bf90-config-data\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.730770 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9929d39-1191-4732-a51f-16d2f973bf90-config-data\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.731051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9929d39-1191-4732-a51f-16d2f973bf90-logs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.732015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9929d39-1191-4732-a51f-16d2f973bf90-scripts\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.737536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-horizon-tls-certs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.742960 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-combined-ca-bundle\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.753175 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9929d39-1191-4732-a51f-16d2f973bf90-horizon-secret-key\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.770857 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgbs\" (UniqueName: \"kubernetes.io/projected/f9929d39-1191-4732-a51f-16d2f973bf90-kube-api-access-cxgbs\") pod \"horizon-66fb787db8-jqwt8\" (UID: \"f9929d39-1191-4732-a51f-16d2f973bf90\") " pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:08 crc kubenswrapper[4858]: I1205 14:15:08.959392 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66fb787db8-jqwt8"
Dec 05 14:15:09 crc kubenswrapper[4858]: I1205 14:15:09.428039 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt"
Dec 05 14:15:09 crc kubenswrapper[4858]: I1205 14:15:09.558375 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c64677f45-sx5vn"]
Dec 05 14:15:09 crc kubenswrapper[4858]: I1205 14:15:09.558598 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" containerID="cri-o://831df3f785b1c9d6270168097a866bd797156c7bf8c05deed25fc1711304b623" gracePeriod=10
Dec 05 14:15:10 crc kubenswrapper[4858]: I1205 14:15:10.345649 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused"
Dec 05 14:15:10 crc kubenswrapper[4858]: I1205 14:15:10.654984 4858 generic.go:334] "Generic (PLEG): container finished" podID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerID="831df3f785b1c9d6270168097a866bd797156c7bf8c05deed25fc1711304b623" exitCode=0
Dec 05 14:15:10 crc kubenswrapper[4858]: I1205 14:15:10.655027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" event={"ID":"322a7082-a7b1-4eed-a9b7-6ecad109cb76","Type":"ContainerDied","Data":"831df3f785b1c9d6270168097a866bd797156c7bf8c05deed25fc1711304b623"}
Dec 05 14:15:12 crc kubenswrapper[4858]: I1205 14:15:12.682103 4858 generic.go:334] "Generic (PLEG): container finished" podID="da917591-312f-4f37-826f-3e565d811b1e" containerID="dbb82e89de717b88543f98ac96946accb295f41533bf00e984ef5a1cc5feaabd" exitCode=0
Dec 05 14:15:12 crc kubenswrapper[4858]: I1205 14:15:12.682572 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pt5l" event={"ID":"da917591-312f-4f37-826f-3e565d811b1e","Type":"ContainerDied","Data":"dbb82e89de717b88543f98ac96946accb295f41533bf00e984ef5a1cc5feaabd"}
Dec 05 14:15:15 crc kubenswrapper[4858]: I1205 14:15:15.345927 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused"
Dec 05 14:15:20 crc kubenswrapper[4858]: I1205 14:15:20.345682 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused"
Dec 05 14:15:20 crc kubenswrapper[4858]: I1205 14:15:20.346096 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn"
Dec 05 14:15:21 crc kubenswrapper[4858]: E1205 14:15:21.442836 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-barbican-api:fa2bb8efef6782c26ea7f1675eeb36dd"
Dec 05 14:15:21 crc kubenswrapper[4858]: E1205 14:15:21.443146 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-barbican-api:fa2bb8efef6782c26ea7f1675eeb36dd"
Dec 05 14:15:21 crc kubenswrapper[4858]: E1205 14:15:21.443285 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-barbican-api:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbgp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-5f99f_openstack(945b1178-6672-45ba-bee9-335d1a2fec5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 14:15:21 crc kubenswrapper[4858]: E1205 14:15:21.444466 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-5f99f" podUID="945b1178-6672-45ba-bee9-335d1a2fec5c"
Dec 05 14:15:21 crc kubenswrapper[4858]: E1205 14:15:21.797317 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-barbican-api:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/barbican-db-sync-5f99f" podUID="945b1178-6672-45ba-bee9-335d1a2fec5c"
Dec 05 14:15:24 crc kubenswrapper[4858]: E1205 14:15:24.875890 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-placement-api:fa2bb8efef6782c26ea7f1675eeb36dd"
Dec 05 14:15:24 crc kubenswrapper[4858]: E1205 14:15:24.876470 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-placement-api:fa2bb8efef6782c26ea7f1675eeb36dd"
Dec 05 14:15:24 crc kubenswrapper[4858]: E1205 14:15:24.876597 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-placement-api:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnrm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-s8q57_openstack(9f8c113e-5e71-4e4f-a8c7-66caea8a6068): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 14:15:24 crc kubenswrapper[4858]: E1205 14:15:24.877956 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-s8q57" podUID="9f8c113e-5e71-4e4f-a8c7-66caea8a6068"
Dec 05 14:15:24 crc kubenswrapper[4858]: E1205 14:15:24.907182 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd"
Dec 05 14:15:24 crc kubenswrapper[4858]: E1205 14:15:24.907241 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd"
Dec 05 14:15:24 crc kubenswrapper[4858]: E1205 14:15:24.907381 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n99h79hbbh59bh64fh667h64ch598h7bh5f6hf5hdh7ch6fh59dh674hbbh694h94h659h7h577h5f6h96h576h575hcdhffhb4h566h5dh96q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxq9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5dc46bfdbc-6gbs5_openstack(f9789e7f-de7c-44a6-9a33-683b8f9d99c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 05 14:15:24 crc kubenswrapper[4858]: E1205 14:15:24.909410 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"]" pod="openstack/horizon-5dc46bfdbc-6gbs5" podUID="f9789e7f-de7c-44a6-9a33-683b8f9d99c5"
Dec 05 14:15:25 crc kubenswrapper[4858]: E1205 14:15:25.840172 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-placement-api:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/placement-db-sync-s8q57" podUID="9f8c113e-5e71-4e4f-a8c7-66caea8a6068"
Dec 05 14:15:29 crc kubenswrapper[4858]: E1205 14:15:29.795905 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd"
Dec 05 14:15:29 crc kubenswrapper[4858]: E1205 14:15:29.796455 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd"
context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:15:29 crc kubenswrapper[4858]: E1205 14:15:29.796587 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65dhb9h5ffh5f9h5c4h76h5b8h74h685h56h66h57hb7h684hd4h696h5cfh9h57fh598h65chcfhf5hcbh57ch5cch6fh658h5f5h68dh68bhbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txwkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-59df675d85-2pvbb_openstack(c1257de8-8700-4326-9443-c10295c6ad73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:15:29 crc kubenswrapper[4858]: E1205 14:15:29.798471 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-horizon:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"]" pod="openstack/horizon-59df675d85-2pvbb" podUID="c1257de8-8700-4326-9443-c10295c6ad73" Dec 05 14:15:29 crc kubenswrapper[4858]: I1205 14:15:29.822802 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 14:15:29 crc kubenswrapper[4858]: I1205 14:15:29.822876 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 14:15:29 crc kubenswrapper[4858]: I1205 14:15:29.869744 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6pt5l" 
event={"ID":"da917591-312f-4f37-826f-3e565d811b1e","Type":"ContainerDied","Data":"b4af1c3071a13b00a50556d1fc65ac34c40849950a05005f5677e5d853ef9014"} Dec 05 14:15:29 crc kubenswrapper[4858]: I1205 14:15:29.869785 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4af1c3071a13b00a50556d1fc65ac34c40849950a05005f5677e5d853ef9014" Dec 05 14:15:29 crc kubenswrapper[4858]: I1205 14:15:29.898734 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.022063 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-credential-keys\") pod \"da917591-312f-4f37-826f-3e565d811b1e\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.022121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8bwv\" (UniqueName: \"kubernetes.io/projected/da917591-312f-4f37-826f-3e565d811b1e-kube-api-access-v8bwv\") pod \"da917591-312f-4f37-826f-3e565d811b1e\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.022204 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-combined-ca-bundle\") pod \"da917591-312f-4f37-826f-3e565d811b1e\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.022244 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-scripts\") pod \"da917591-312f-4f37-826f-3e565d811b1e\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.022288 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-fernet-keys\") pod \"da917591-312f-4f37-826f-3e565d811b1e\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.022337 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-config-data\") pod \"da917591-312f-4f37-826f-3e565d811b1e\" (UID: \"da917591-312f-4f37-826f-3e565d811b1e\") " Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.036940 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "da917591-312f-4f37-826f-3e565d811b1e" (UID: "da917591-312f-4f37-826f-3e565d811b1e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.043965 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "da917591-312f-4f37-826f-3e565d811b1e" (UID: "da917591-312f-4f37-826f-3e565d811b1e"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.068911 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-scripts" (OuterVolumeSpecName: "scripts") pod "da917591-312f-4f37-826f-3e565d811b1e" (UID: "da917591-312f-4f37-826f-3e565d811b1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.078880 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da917591-312f-4f37-826f-3e565d811b1e" (UID: "da917591-312f-4f37-826f-3e565d811b1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.080300 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-config-data" (OuterVolumeSpecName: "config-data") pod "da917591-312f-4f37-826f-3e565d811b1e" (UID: "da917591-312f-4f37-826f-3e565d811b1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.101963 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da917591-312f-4f37-826f-3e565d811b1e-kube-api-access-v8bwv" (OuterVolumeSpecName: "kube-api-access-v8bwv") pod "da917591-312f-4f37-826f-3e565d811b1e" (UID: "da917591-312f-4f37-826f-3e565d811b1e"). InnerVolumeSpecName "kube-api-access-v8bwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.127956 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.127996 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.128014 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8bwv\" (UniqueName: \"kubernetes.io/projected/da917591-312f-4f37-826f-3e565d811b1e-kube-api-access-v8bwv\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.128027 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.128038 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.128047 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da917591-312f-4f37-826f-3e565d811b1e-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.169397 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.169454 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.348025 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Dec 05 14:15:30 crc kubenswrapper[4858]: I1205 14:15:30.877579 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6pt5l" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.012154 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6pt5l"] Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.023297 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6pt5l"] Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.066982 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h8ccs"] Dec 05 14:15:31 crc kubenswrapper[4858]: E1205 14:15:31.067467 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da917591-312f-4f37-826f-3e565d811b1e" containerName="keystone-bootstrap" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.067488 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da917591-312f-4f37-826f-3e565d811b1e" containerName="keystone-bootstrap" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.067697 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="da917591-312f-4f37-826f-3e565d811b1e" containerName="keystone-bootstrap" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.068429 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.071049 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.074151 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.074240 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.074267 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbtl5" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.074150 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.088579 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h8ccs"] Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.144064 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-combined-ca-bundle\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.144115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-scripts\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.144145 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4h46\" (UniqueName: \"kubernetes.io/projected/1fd10daa-322e-4445-9671-d50447afa9d7-kube-api-access-l4h46\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.144438 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-fernet-keys\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.144546 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-config-data\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.144606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-credential-keys\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.246074 4858 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-scripts\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.246154 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4h46\" (UniqueName: \"kubernetes.io/projected/1fd10daa-322e-4445-9671-d50447afa9d7-kube-api-access-l4h46\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.246253 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-fernet-keys\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.246294 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-config-data\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.246332 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-credential-keys\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.246389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-combined-ca-bundle\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.253752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-config-data\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.253778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-fernet-keys\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.260354 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-credential-keys\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.262001 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-scripts\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " 
pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.263989 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-combined-ca-bundle\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.268703 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4h46\" (UniqueName: \"kubernetes.io/projected/1fd10daa-322e-4445-9671-d50447afa9d7-kube-api-access-l4h46\") pod \"keystone-bootstrap-h8ccs\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.387591 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:31 crc kubenswrapper[4858]: I1205 14:15:31.909851 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da917591-312f-4f37-826f-3e565d811b1e" path="/var/lib/kubelet/pods/da917591-312f-4f37-826f-3e565d811b1e/volumes" Dec 05 14:15:35 crc kubenswrapper[4858]: I1205 14:15:35.348758 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Dec 05 14:15:39 crc kubenswrapper[4858]: E1205 14:15:39.303029 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-ceilometer-central:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:15:39 crc kubenswrapper[4858]: E1205 14:15:39.303575 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-ceilometer-central:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:15:39 crc kubenswrapper[4858]: E1205 14:15:39.303719 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-ceilometer-central:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8bh68bh57bh66h686h568h658h557h5d5h59dh54bhcdh57dhc9h668h667h6bh677h69h669h5b8h64fh5b9hfbh587hb4h587hfch57h9ch5c7h579q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9g47j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(30bc8a2e-6170-4c4e-9289-ba46ae2768e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.399157 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.408742 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.418613 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.430784 4858 util.go:48] "No ready sandbox for pod can be found. 
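Need to start a new one" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn"

The three E-level entries above are a single failure reported at increasing altitude: the CRI client surfaces the gRPC error (log.go:32), kuberuntime_image.go:55 records the failed pull, and kuberuntime_manager.go:1274 logs an "Unhandled Error" that embeds the entire serialized Container spec for ceilometer-central-agent. The operative text is "rpc error: code = Canceled desc = copying config: context canceled": the pull was aborted mid-copy because its context was canceled, not because the registry at 38.102.83.97:5001 rejected the image, and the kubelet will retry it on a later sync. A toy Go sketch of how that error string arises from ordinary context cancellation (illustrative; the real call path goes through the CRI gRPC client):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // Stand-in for a CRI image pull: block until the "copy" finishes or the
    // caller's context is canceled, as happened to the ceilometer pull above.
    func pullImage(ctx context.Context, image string) error {
        select {
        case <-time.After(5 * time.Second): // pretend to copy layers and config
            return nil
        case <-ctx.Done():
            return fmt.Errorf("copying config: %w", ctx.Err())
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go func() { time.Sleep(100 * time.Millisecond); cancel() }() // pull aborted mid-copy
        err := pullImage(ctx, "38.102.83.97:5001/podified-antelope-centos9/openstack-ceilometer-central:fa2bb8efef6782c26ea7f1675eeb36dd")
        fmt.Println(err, "| canceled:", errors.Is(err, context.Canceled))
    }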
Need to start a new one" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.525569 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-internal-tls-certs\") pod \"3bb2d017-3a44-4da1-9787-ba8e35d617de\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.525840 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-logs\") pod \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.525858 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-combined-ca-bundle\") pod \"3bb2d017-3a44-4da1-9787-ba8e35d617de\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.525875 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-logs\") pod \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.525890 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9hfg\" (UniqueName: \"kubernetes.io/projected/322a7082-a7b1-4eed-a9b7-6ecad109cb76-kube-api-access-b9hfg\") pod \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.525909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-logs\") pod \"3bb2d017-3a44-4da1-9787-ba8e35d617de\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.525935 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3bb2d017-3a44-4da1-9787-ba8e35d617de\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.525973 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-swift-storage-0\") pod \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526028 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-scripts\") pod \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526048 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-horizon-secret-key\") pod \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " Dec 05 14:15:39 
crc kubenswrapper[4858]: I1205 14:15:39.526072 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-config-data\") pod \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526089 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqrr6\" (UniqueName: \"kubernetes.io/projected/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-kube-api-access-rqrr6\") pod \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526109 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-httpd-run\") pod \"3bb2d017-3a44-4da1-9787-ba8e35d617de\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526136 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-scripts\") pod \"3bb2d017-3a44-4da1-9787-ba8e35d617de\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526164 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-sb\") pod \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526179 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-config-data\") pod \"3bb2d017-3a44-4da1-9787-ba8e35d617de\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526208 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-public-tls-certs\") pod \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526234 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526275 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-combined-ca-bundle\") pod \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-scripts\") pod \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526540 4858 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-config\") pod \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxq9j\" (UniqueName: \"kubernetes.io/projected/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-kube-api-access-dxq9j\") pod \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\" (UID: \"f9789e7f-de7c-44a6-9a33-683b8f9d99c5\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526605 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx7j7\" (UniqueName: \"kubernetes.io/projected/3bb2d017-3a44-4da1-9787-ba8e35d617de-kube-api-access-vx7j7\") pod \"3bb2d017-3a44-4da1-9787-ba8e35d617de\" (UID: \"3bb2d017-3a44-4da1-9787-ba8e35d617de\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526649 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-httpd-run\") pod \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526678 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-svc\") pod \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526716 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-config-data\") pod \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\" (UID: \"3f5f8181-a6e6-4ec0-854c-83cdeded5b16\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.526735 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-nb\") pod \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\" (UID: \"322a7082-a7b1-4eed-a9b7-6ecad109cb76\") " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.528272 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3bb2d017-3a44-4da1-9787-ba8e35d617de" (UID: "3bb2d017-3a44-4da1-9787-ba8e35d617de"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.542535 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb2d017-3a44-4da1-9787-ba8e35d617de-kube-api-access-vx7j7" (OuterVolumeSpecName: "kube-api-access-vx7j7") pod "3bb2d017-3a44-4da1-9787-ba8e35d617de" (UID: "3bb2d017-3a44-4da1-9787-ba8e35d617de"). InnerVolumeSpecName "kube-api-access-vx7j7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.542807 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-scripts" (OuterVolumeSpecName: "scripts") pod "3bb2d017-3a44-4da1-9787-ba8e35d617de" (UID: "3bb2d017-3a44-4da1-9787-ba8e35d617de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.542928 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "3bb2d017-3a44-4da1-9787-ba8e35d617de" (UID: "3bb2d017-3a44-4da1-9787-ba8e35d617de"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.543735 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-logs" (OuterVolumeSpecName: "logs") pod "3f5f8181-a6e6-4ec0-854c-83cdeded5b16" (UID: "3f5f8181-a6e6-4ec0-854c-83cdeded5b16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.551502 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-scripts" (OuterVolumeSpecName: "scripts") pod "3f5f8181-a6e6-4ec0-854c-83cdeded5b16" (UID: "3f5f8181-a6e6-4ec0-854c-83cdeded5b16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.557252 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-logs" (OuterVolumeSpecName: "logs") pod "3bb2d017-3a44-4da1-9787-ba8e35d617de" (UID: "3bb2d017-3a44-4da1-9787-ba8e35d617de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.557567 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-logs" (OuterVolumeSpecName: "logs") pod "f9789e7f-de7c-44a6-9a33-683b8f9d99c5" (UID: "f9789e7f-de7c-44a6-9a33-683b8f9d99c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.557758 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-scripts" (OuterVolumeSpecName: "scripts") pod "f9789e7f-de7c-44a6-9a33-683b8f9d99c5" (UID: "f9789e7f-de7c-44a6-9a33-683b8f9d99c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.557994 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-config-data" (OuterVolumeSpecName: "config-data") pod "f9789e7f-de7c-44a6-9a33-683b8f9d99c5" (UID: "f9789e7f-de7c-44a6-9a33-683b8f9d99c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.558043 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3f5f8181-a6e6-4ec0-854c-83cdeded5b16" (UID: "3f5f8181-a6e6-4ec0-854c-83cdeded5b16"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.564148 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f9789e7f-de7c-44a6-9a33-683b8f9d99c5" (UID: "f9789e7f-de7c-44a6-9a33-683b8f9d99c5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.564322 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-kube-api-access-dxq9j" (OuterVolumeSpecName: "kube-api-access-dxq9j") pod "f9789e7f-de7c-44a6-9a33-683b8f9d99c5" (UID: "f9789e7f-de7c-44a6-9a33-683b8f9d99c5"). InnerVolumeSpecName "kube-api-access-dxq9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.568073 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-kube-api-access-rqrr6" (OuterVolumeSpecName: "kube-api-access-rqrr6") pod "3f5f8181-a6e6-4ec0-854c-83cdeded5b16" (UID: "3f5f8181-a6e6-4ec0-854c-83cdeded5b16"). InnerVolumeSpecName "kube-api-access-rqrr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.568149 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322a7082-a7b1-4eed-a9b7-6ecad109cb76-kube-api-access-b9hfg" (OuterVolumeSpecName: "kube-api-access-b9hfg") pod "322a7082-a7b1-4eed-a9b7-6ecad109cb76" (UID: "322a7082-a7b1-4eed-a9b7-6ecad109cb76"). InnerVolumeSpecName "kube-api-access-b9hfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.573469 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "3f5f8181-a6e6-4ec0-854c-83cdeded5b16" (UID: "3f5f8181-a6e6-4ec0-854c-83cdeded5b16"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629413 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629454 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9hfg\" (UniqueName: \"kubernetes.io/projected/322a7082-a7b1-4eed-a9b7-6ecad109cb76-kube-api-access-b9hfg\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629465 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629474 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629505 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629515 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629527 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629536 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629544 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqrr6\" (UniqueName: \"kubernetes.io/projected/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-kube-api-access-rqrr6\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629553 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3bb2d017-3a44-4da1-9787-ba8e35d617de-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629561 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629575 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629583 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629592 4858 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dxq9j\" (UniqueName: \"kubernetes.io/projected/f9789e7f-de7c-44a6-9a33-683b8f9d99c5-kube-api-access-dxq9j\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629604 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx7j7\" (UniqueName: \"kubernetes.io/projected/3bb2d017-3a44-4da1-9787-ba8e35d617de-kube-api-access-vx7j7\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.629624 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.633679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-config" (OuterVolumeSpecName: "config") pod "322a7082-a7b1-4eed-a9b7-6ecad109cb76" (UID: "322a7082-a7b1-4eed-a9b7-6ecad109cb76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.635086 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bb2d017-3a44-4da1-9787-ba8e35d617de" (UID: "3bb2d017-3a44-4da1-9787-ba8e35d617de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.667407 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.679722 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.685743 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "322a7082-a7b1-4eed-a9b7-6ecad109cb76" (UID: "322a7082-a7b1-4eed-a9b7-6ecad109cb76"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.692425 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "322a7082-a7b1-4eed-a9b7-6ecad109cb76" (UID: "322a7082-a7b1-4eed-a9b7-6ecad109cb76"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.695325 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "322a7082-a7b1-4eed-a9b7-6ecad109cb76" (UID: "322a7082-a7b1-4eed-a9b7-6ecad109cb76"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.705218 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "322a7082-a7b1-4eed-a9b7-6ecad109cb76" (UID: "322a7082-a7b1-4eed-a9b7-6ecad109cb76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.708396 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3bb2d017-3a44-4da1-9787-ba8e35d617de" (UID: "3bb2d017-3a44-4da1-9787-ba8e35d617de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.709038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f5f8181-a6e6-4ec0-854c-83cdeded5b16" (UID: "3f5f8181-a6e6-4ec0-854c-83cdeded5b16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.713582 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-config-data" (OuterVolumeSpecName: "config-data") pod "3bb2d017-3a44-4da1-9787-ba8e35d617de" (UID: "3bb2d017-3a44-4da1-9787-ba8e35d617de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.714632 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-config-data" (OuterVolumeSpecName: "config-data") pod "3f5f8181-a6e6-4ec0-854c-83cdeded5b16" (UID: "3f5f8181-a6e6-4ec0-854c-83cdeded5b16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.721761 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3f5f8181-a6e6-4ec0-854c-83cdeded5b16" (UID: "3f5f8181-a6e6-4ec0-854c-83cdeded5b16"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731388 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731409 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731419 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731427 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731435 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731444 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731452 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731526 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f5f8181-a6e6-4ec0-854c-83cdeded5b16-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731535 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731545 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731554 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bb2d017-3a44-4da1-9787-ba8e35d617de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731565 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.731573 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/322a7082-a7b1-4eed-a9b7-6ecad109cb76-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.999386 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f5f8181-a6e6-4ec0-854c-83cdeded5b16","Type":"ContainerDied","Data":"0246ead91517ebd76557a418e39d8b4868d1a75e0e4c205b394c86fea6b00e70"} Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.999416 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:15:39 crc kubenswrapper[4858]: I1205 14:15:39.999444 4858 scope.go:117] "RemoveContainer" containerID="c3e461878fbd93abae24f67d129a4382f236eb34c3978e2252f075016cdd5e3d" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.003660 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3bb2d017-3a44-4da1-9787-ba8e35d617de","Type":"ContainerDied","Data":"fb1018266db75f8e2bb68366a0c965e3437a13602e38f76f603a19d5ce001d19"} Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.003802 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.008696 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dc46bfdbc-6gbs5" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.008703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dc46bfdbc-6gbs5" event={"ID":"f9789e7f-de7c-44a6-9a33-683b8f9d99c5","Type":"ContainerDied","Data":"f028a93ae11f38054a000cf4f9d20d13588d21ca95e2d5f3355e0bd503a5777c"} Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.012690 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" event={"ID":"322a7082-a7b1-4eed-a9b7-6ecad109cb76","Type":"ContainerDied","Data":"fd3c58f16b46c393aaab478c47dcc69136a0822537daf9c0543ed9ee7a726105"} Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.012782 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.047891 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.062537 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.082166 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.097902 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.115885 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.116259 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerName="glance-httpd" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116271 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerName="glance-httpd" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.116283 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerName="glance-httpd" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116289 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerName="glance-httpd" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.116308 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116314 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.116326 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="init" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116331 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="init" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.116349 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerName="glance-log" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116355 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerName="glance-log" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.116363 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerName="glance-log" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116369 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerName="glance-log" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116527 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116541 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerName="glance-log" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116551 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerName="glance-httpd" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116564 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" containerName="glance-log" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.116569 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" containerName="glance-httpd" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.117527 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.120918 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-heat-engine:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.120978 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-heat-engine:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.121145 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-heat-engine:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klfr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-glkkv_openstack(9be96efe-970b-4639-8744-3e63a0abfbd6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:15:40 crc kubenswrapper[4858]: E1205 14:15:40.122303 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-glkkv" podUID="9be96efe-970b-4639-8744-3e63a0abfbd6" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.136196 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-tfbpg" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.136498 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.136607 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.136714 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.171685 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.173731 4858 util.go:48] "No ready sandbox for pod can be found. 
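Need to start a new one" pod="openstack/horizon-59df675d85-2pvbb"

The heat-db-sync pull fails exactly like the ceilometer one, and pod_workers.go:1301 closes the sync attempt with "Error syncing pod, skipping ... ErrImagePull"; the pod is left for a later sync pass, subject to the kubelet's image-pull backoff, while the "Caches populated for *v1.Secret" reflector lines show the kubelet starting watches for the secrets the replacement glance pod mounts. A sketch of the retry cadence behind repeated ErrImagePull entries, assuming the commonly cited kubelet defaults of a 10s initial delay doubling to a 5m cap (the exact values are kubelet configuration and are not visible in this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay // further retries stay at the cap
            }
        }
    }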
Need to start a new one" pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.191198 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dc46bfdbc-6gbs5"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.195841 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5dc46bfdbc-6gbs5"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.263322 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.266260 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txwkb\" (UniqueName: \"kubernetes.io/projected/c1257de8-8700-4326-9443-c10295c6ad73-kube-api-access-txwkb\") pod \"c1257de8-8700-4326-9443-c10295c6ad73\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.266337 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1257de8-8700-4326-9443-c10295c6ad73-horizon-secret-key\") pod \"c1257de8-8700-4326-9443-c10295c6ad73\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.266399 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-config-data\") pod \"c1257de8-8700-4326-9443-c10295c6ad73\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.266571 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-scripts\") pod \"c1257de8-8700-4326-9443-c10295c6ad73\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.266737 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1257de8-8700-4326-9443-c10295c6ad73-logs\") pod \"c1257de8-8700-4326-9443-c10295c6ad73\" (UID: \"c1257de8-8700-4326-9443-c10295c6ad73\") " Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.269306 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.271417 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-config-data" (OuterVolumeSpecName: "config-data") pod "c1257de8-8700-4326-9443-c10295c6ad73" (UID: "c1257de8-8700-4326-9443-c10295c6ad73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.275652 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-scripts" (OuterVolumeSpecName: "scripts") pod "c1257de8-8700-4326-9443-c10295c6ad73" (UID: "c1257de8-8700-4326-9443-c10295c6ad73"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.277613 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.278436 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.278491 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.278556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.279053 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.279088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.279145 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.279299 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.279421 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfvdj\" (UniqueName: \"kubernetes.io/projected/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-kube-api-access-zfvdj\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.279591 4858 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.279610 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1257de8-8700-4326-9443-c10295c6ad73-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.278798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1257de8-8700-4326-9443-c10295c6ad73-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c1257de8-8700-4326-9443-c10295c6ad73" (UID: "c1257de8-8700-4326-9443-c10295c6ad73"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.317861 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1257de8-8700-4326-9443-c10295c6ad73-logs" (OuterVolumeSpecName: "logs") pod "c1257de8-8700-4326-9443-c10295c6ad73" (UID: "c1257de8-8700-4326-9443-c10295c6ad73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.332119 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.340501 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.356556 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c64677f45-sx5vn" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.361644 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1257de8-8700-4326-9443-c10295c6ad73-kube-api-access-txwkb" (OuterVolumeSpecName: "kube-api-access-txwkb") pod "c1257de8-8700-4326-9443-c10295c6ad73" (UID: "c1257de8-8700-4326-9443-c10295c6ad73"). InnerVolumeSpecName "kube-api-access-txwkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.363916 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c64677f45-sx5vn"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383036 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383188 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg82r\" (UniqueName: \"kubernetes.io/projected/d51a537e-24d5-4083-8c7a-8e7abd0abd49-kube-api-access-hg82r\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383253 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-scripts\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383340 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383401 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfvdj\" (UniqueName: \"kubernetes.io/projected/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-kube-api-access-zfvdj\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383435 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-logs\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383494 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-config-data\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383532 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383564 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383592 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383636 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383693 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383720 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.383747 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.390692 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.391543 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.391619 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.396147 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1257de8-8700-4326-9443-c10295c6ad73-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.396189 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txwkb\" (UniqueName: \"kubernetes.io/projected/c1257de8-8700-4326-9443-c10295c6ad73-kube-api-access-txwkb\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.396203 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1257de8-8700-4326-9443-c10295c6ad73-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.396524 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.399623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.403056 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.403929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.406419 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c64677f45-sx5vn"] Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.435748 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfvdj\" (UniqueName: \"kubernetes.io/projected/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-kube-api-access-zfvdj\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.485058 
4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.499437 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.499515 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.499573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg82r\" (UniqueName: \"kubernetes.io/projected/d51a537e-24d5-4083-8c7a-8e7abd0abd49-kube-api-access-hg82r\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.499631 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-scripts\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.499651 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.499694 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-logs\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.499739 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-config-data\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.499769 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.500108 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.502490 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.503045 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-logs\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.506755 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.510875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-config-data\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.515743 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.525345 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.526886 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-scripts\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.539872 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg82r\" (UniqueName: \"kubernetes.io/projected/d51a537e-24d5-4083-8c7a-8e7abd0abd49-kube-api-access-hg82r\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.546143 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " pod="openstack/glance-default-external-api-0" Dec 05 14:15:40 crc kubenswrapper[4858]: I1205 14:15:40.710514 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.023225 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59df675d85-2pvbb" Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.023941 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59df675d85-2pvbb" event={"ID":"c1257de8-8700-4326-9443-c10295c6ad73","Type":"ContainerDied","Data":"9f7090f8391dbbda9ceabaf9743cd9b5318e3e531d79d48c4a52d54f73311fd9"} Dec 05 14:15:41 crc kubenswrapper[4858]: E1205 14:15:41.025150 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-heat-engine:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/heat-db-sync-glkkv" podUID="9be96efe-970b-4639-8744-3e63a0abfbd6" Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.128732 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59df675d85-2pvbb"] Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.138795 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59df675d85-2pvbb"] Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.909777 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322a7082-a7b1-4eed-a9b7-6ecad109cb76" path="/var/lib/kubelet/pods/322a7082-a7b1-4eed-a9b7-6ecad109cb76/volumes" Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.910793 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb2d017-3a44-4da1-9787-ba8e35d617de" path="/var/lib/kubelet/pods/3bb2d017-3a44-4da1-9787-ba8e35d617de/volumes" Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.911485 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5f8181-a6e6-4ec0-854c-83cdeded5b16" path="/var/lib/kubelet/pods/3f5f8181-a6e6-4ec0-854c-83cdeded5b16/volumes" Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.912695 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1257de8-8700-4326-9443-c10295c6ad73" path="/var/lib/kubelet/pods/c1257de8-8700-4326-9443-c10295c6ad73/volumes" Dec 05 14:15:41 crc kubenswrapper[4858]: I1205 14:15:41.913168 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9789e7f-de7c-44a6-9a33-683b8f9d99c5" path="/var/lib/kubelet/pods/f9789e7f-de7c-44a6-9a33-683b8f9d99c5/volumes" Dec 05 14:15:41 crc kubenswrapper[4858]: E1205 14:15:41.974141 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-cinder-api:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:15:41 crc kubenswrapper[4858]: E1205 14:15:41.974188 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-cinder-api:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:15:41 crc kubenswrapper[4858]: E1205 14:15:41.974318 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-cinder-api:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f298f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-fbkbh_openstack(aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 14:15:41 crc kubenswrapper[4858]: E1205 14:15:41.975662 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-fbkbh" podUID="aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" Dec 05 14:15:42 crc kubenswrapper[4858]: I1205 14:15:42.024458 4858 scope.go:117] "RemoveContainer" containerID="1b60ba235413a3ee397a55319db8620711d6a778a0c8a39b674472064f477fb8" Dec 05 14:15:42 crc kubenswrapper[4858]: E1205 14:15:42.071222 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-cinder-api:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/cinder-db-sync-fbkbh" podUID="aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" Dec 05 14:15:42 crc kubenswrapper[4858]: I1205 14:15:42.186294 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66fd8d549b-n87dk"] Dec 05 14:15:42 crc kubenswrapper[4858]: I1205 
14:15:42.580791 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66fb787db8-jqwt8"] Dec 05 14:15:42 crc kubenswrapper[4858]: I1205 14:15:42.759538 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h8ccs"] Dec 05 14:15:42 crc kubenswrapper[4858]: I1205 14:15:42.782295 4858 scope.go:117] "RemoveContainer" containerID="d13d83534d069dc01f3832d85fd130ce46732ef13817173f0dd8acd4052e6829" Dec 05 14:15:42 crc kubenswrapper[4858]: W1205 14:15:42.813572 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4e91f9c_4d1e_4765_b609_32b5531066bf.slice/crio-30e2442e77542139b9d497cb150822d742443cdd973da72b02b3225afb2ac138 WatchSource:0}: Error finding container 30e2442e77542139b9d497cb150822d742443cdd973da72b02b3225afb2ac138: Status 404 returned error can't find the container with id 30e2442e77542139b9d497cb150822d742443cdd973da72b02b3225afb2ac138 Dec 05 14:15:42 crc kubenswrapper[4858]: I1205 14:15:42.841081 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 14:15:42 crc kubenswrapper[4858]: I1205 14:15:42.860218 4858 scope.go:117] "RemoveContainer" containerID="43afeb5b342fa532cf0f80180310630f1f561012b6536cab24ac8aefdc972799" Dec 05 14:15:43 crc kubenswrapper[4858]: I1205 14:15:43.065924 4858 scope.go:117] "RemoveContainer" containerID="831df3f785b1c9d6270168097a866bd797156c7bf8c05deed25fc1711304b623" Dec 05 14:15:43 crc kubenswrapper[4858]: I1205 14:15:43.071423 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8ccs" event={"ID":"1fd10daa-322e-4445-9671-d50447afa9d7","Type":"ContainerStarted","Data":"6e747b92ce414e7770ec8f3ab73a718bd8b1f5df9c3684699aa274ba09c95b8f"} Dec 05 14:15:43 crc kubenswrapper[4858]: I1205 14:15:43.075796 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fb787db8-jqwt8" event={"ID":"f9929d39-1191-4732-a51f-16d2f973bf90","Type":"ContainerStarted","Data":"10fc5021f818b1c02a55052c18b407be98d656521d816fbe7ef3b4b0dba14e1b"} Dec 05 14:15:43 crc kubenswrapper[4858]: I1205 14:15:43.083749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fd8d549b-n87dk" event={"ID":"f4e91f9c-4d1e-4765-b609-32b5531066bf","Type":"ContainerStarted","Data":"30e2442e77542139b9d497cb150822d742443cdd973da72b02b3225afb2ac138"} Dec 05 14:15:43 crc kubenswrapper[4858]: I1205 14:15:43.196134 4858 scope.go:117] "RemoveContainer" containerID="2b9287f6435080d6b22e4707bcc15ab7726e55b6988908610fb4b91024c83666" Dec 05 14:15:43 crc kubenswrapper[4858]: I1205 14:15:43.349777 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:15:43 crc kubenswrapper[4858]: W1205 14:15:43.385953 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9217a0b0_fdbc_4a4b_8580_57e50d4240d6.slice/crio-8c771fc48da6d7066f7fcc7cef9fe6988b3b541d971661c63dcdd3f0c33e69d4 WatchSource:0}: Error finding container 8c771fc48da6d7066f7fcc7cef9fe6988b3b541d971661c63dcdd3f0c33e69d4: Status 404 returned error can't find the container with id 8c771fc48da6d7066f7fcc7cef9fe6988b3b541d971661c63dcdd3f0c33e69d4 Dec 05 14:15:43 crc kubenswrapper[4858]: I1205 14:15:43.460091 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.175436 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fd8d549b-n87dk" event={"ID":"f4e91f9c-4d1e-4765-b609-32b5531066bf","Type":"ContainerStarted","Data":"61f7dd3bef7baaad01301f499bc946fc6b7f67a00416e4a5dc1f0bf9d190b0df"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.176009 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fd8d549b-n87dk" event={"ID":"f4e91f9c-4d1e-4765-b609-32b5531066bf","Type":"ContainerStarted","Data":"01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.202169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d51a537e-24d5-4083-8c7a-8e7abd0abd49","Type":"ContainerStarted","Data":"d894c9491fa27ab9ac494a470e659516bd4ebc0b8dc7b0925f0b4ee9822c3456"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.229521 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575f67464c-nsrld" event={"ID":"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c","Type":"ContainerStarted","Data":"fff9443d5f06d90d5763eef075c535599704e5911c68b84cf3f3a9a6ddd9ab9d"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.230879 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bc8a2e-6170-4c4e-9289-ba46ae2768e8","Type":"ContainerStarted","Data":"b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.252105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fb787db8-jqwt8" event={"ID":"f9929d39-1191-4732-a51f-16d2f973bf90","Type":"ContainerStarted","Data":"9cc2e264bced84320f6d89edeb2e0a6a0701f1a0697852d30b22f27176214254"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.269591 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5f99f" event={"ID":"945b1178-6672-45ba-bee9-335d1a2fec5c","Type":"ContainerStarted","Data":"9ec7f3c7d56605d95fb866a8fd13d3cac9f348ecabe1632ff44025d37aced302"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.272250 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8ccs" event={"ID":"1fd10daa-322e-4445-9671-d50447afa9d7","Type":"ContainerStarted","Data":"43e75b7cf74f1bebb6928b8b904df33609c5a0614a452248da75d92c95f07020"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.280576 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s8q57" event={"ID":"9f8c113e-5e71-4e4f-a8c7-66caea8a6068","Type":"ContainerStarted","Data":"dd0c43c5b3f457cd61d776f4e91369fb64de4b8d51af3fa02bcae90fc1f9ef34"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.282988 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9217a0b0-fdbc-4a4b-8580-57e50d4240d6","Type":"ContainerStarted","Data":"8c771fc48da6d7066f7fcc7cef9fe6988b3b541d971661c63dcdd3f0c33e69d4"} Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.322931 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-5f99f" podStartSLOduration=5.84282965 podStartE2EDuration="46.322911633s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="2025-12-05 14:15:02.32196955 +0000 UTC m=+1110.869567689" lastFinishedPulling="2025-12-05 14:15:42.802051543 +0000 UTC m=+1151.349649672" observedRunningTime="2025-12-05 14:15:44.313236792 +0000 
UTC m=+1152.860834941" watchObservedRunningTime="2025-12-05 14:15:44.322911633 +0000 UTC m=+1152.870509772" Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.326314 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66fd8d549b-n87dk" podStartSLOduration=36.326292704 podStartE2EDuration="36.326292704s" podCreationTimestamp="2025-12-05 14:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:44.235048393 +0000 UTC m=+1152.782646532" watchObservedRunningTime="2025-12-05 14:15:44.326292704 +0000 UTC m=+1152.873890843" Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.368474 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-s8q57" podStartSLOduration=4.345274831 podStartE2EDuration="46.368455862s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="2025-12-05 14:15:00.782076488 +0000 UTC m=+1109.329674627" lastFinishedPulling="2025-12-05 14:15:42.805257519 +0000 UTC m=+1151.352855658" observedRunningTime="2025-12-05 14:15:44.354212047 +0000 UTC m=+1152.901810186" watchObservedRunningTime="2025-12-05 14:15:44.368455862 +0000 UTC m=+1152.916054001" Dec 05 14:15:44 crc kubenswrapper[4858]: I1205 14:15:44.391276 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h8ccs" podStartSLOduration=13.391264787 podStartE2EDuration="13.391264787s" podCreationTimestamp="2025-12-05 14:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:44.390921018 +0000 UTC m=+1152.938519157" watchObservedRunningTime="2025-12-05 14:15:44.391264787 +0000 UTC m=+1152.938862926" Dec 05 14:15:45 crc kubenswrapper[4858]: I1205 14:15:45.320997 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9217a0b0-fdbc-4a4b-8580-57e50d4240d6","Type":"ContainerStarted","Data":"b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382"} Dec 05 14:15:45 crc kubenswrapper[4858]: I1205 14:15:45.325005 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d51a537e-24d5-4083-8c7a-8e7abd0abd49","Type":"ContainerStarted","Data":"616ce3032bc1ee0e159ff6f2cae66db2cfe77778cf3071963e54c7fd762839f9"} Dec 05 14:15:45 crc kubenswrapper[4858]: I1205 14:15:45.329593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575f67464c-nsrld" event={"ID":"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c","Type":"ContainerStarted","Data":"2d72059b626402160d5fe6efc207e1fdabb4d9e3e9be37933836671ce03f6f1d"} Dec 05 14:15:45 crc kubenswrapper[4858]: I1205 14:15:45.329739 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-575f67464c-nsrld" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerName="horizon-log" containerID="cri-o://fff9443d5f06d90d5763eef075c535599704e5911c68b84cf3f3a9a6ddd9ab9d" gracePeriod=30 Dec 05 14:15:45 crc kubenswrapper[4858]: I1205 14:15:45.330234 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-575f67464c-nsrld" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerName="horizon" containerID="cri-o://2d72059b626402160d5fe6efc207e1fdabb4d9e3e9be37933836671ce03f6f1d" gracePeriod=30 Dec 05 14:15:45 crc 
kubenswrapper[4858]: I1205 14:15:45.360474 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-575f67464c-nsrld" podStartSLOduration=8.274680602 podStartE2EDuration="47.360454207s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="2025-12-05 14:15:00.948968359 +0000 UTC m=+1109.496566498" lastFinishedPulling="2025-12-05 14:15:40.034741964 +0000 UTC m=+1148.582340103" observedRunningTime="2025-12-05 14:15:45.358443143 +0000 UTC m=+1153.906041282" watchObservedRunningTime="2025-12-05 14:15:45.360454207 +0000 UTC m=+1153.908052346" Dec 05 14:15:45 crc kubenswrapper[4858]: I1205 14:15:45.360730 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fb787db8-jqwt8" event={"ID":"f9929d39-1191-4732-a51f-16d2f973bf90","Type":"ContainerStarted","Data":"22d0a7a46fc3ae2b7828a4d3f6f59aa262bf8ed16cb09331868098f002150ec0"} Dec 05 14:15:45 crc kubenswrapper[4858]: I1205 14:15:45.396866 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66fb787db8-jqwt8" podStartSLOduration=37.396844809 podStartE2EDuration="37.396844809s" podCreationTimestamp="2025-12-05 14:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:45.382250995 +0000 UTC m=+1153.929849134" watchObservedRunningTime="2025-12-05 14:15:45.396844809 +0000 UTC m=+1153.944442948" Dec 05 14:15:46 crc kubenswrapper[4858]: I1205 14:15:46.370933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d51a537e-24d5-4083-8c7a-8e7abd0abd49","Type":"ContainerStarted","Data":"1259a84d33db1d12af20110a104f0023afa884db9b90923a2a0b1f0fa334f35a"} Dec 05 14:15:46 crc kubenswrapper[4858]: I1205 14:15:46.391918 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9217a0b0-fdbc-4a4b-8580-57e50d4240d6","Type":"ContainerStarted","Data":"d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba"} Dec 05 14:15:46 crc kubenswrapper[4858]: I1205 14:15:46.394483 4858 generic.go:334] "Generic (PLEG): container finished" podID="f11e2282-12af-4a8d-8f16-eab320d07d4e" containerID="e758f9573494956522352e0feafda2d1e9cfbd869deec084d8d4586f528c2e50" exitCode=0 Dec 05 14:15:46 crc kubenswrapper[4858]: I1205 14:15:46.395300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fp96h" event={"ID":"f11e2282-12af-4a8d-8f16-eab320d07d4e","Type":"ContainerDied","Data":"e758f9573494956522352e0feafda2d1e9cfbd869deec084d8d4586f528c2e50"} Dec 05 14:15:46 crc kubenswrapper[4858]: I1205 14:15:46.415916 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.415895194 podStartE2EDuration="6.415895194s" podCreationTimestamp="2025-12-05 14:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:46.403209132 +0000 UTC m=+1154.950807291" watchObservedRunningTime="2025-12-05 14:15:46.415895194 +0000 UTC m=+1154.963493353" Dec 05 14:15:46 crc kubenswrapper[4858]: I1205 14:15:46.431023 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.431007042 podStartE2EDuration="6.431007042s" podCreationTimestamp="2025-12-05 14:15:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:46.430904619 +0000 UTC m=+1154.978502768" watchObservedRunningTime="2025-12-05 14:15:46.431007042 +0000 UTC m=+1154.978605181" Dec 05 14:15:48 crc kubenswrapper[4858]: I1205 14:15:48.655224 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:15:48 crc kubenswrapper[4858]: I1205 14:15:48.655735 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:15:48 crc kubenswrapper[4858]: I1205 14:15:48.960354 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:48 crc kubenswrapper[4858]: I1205 14:15:48.961501 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:15:49 crc kubenswrapper[4858]: I1205 14:15:49.080004 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.431853 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-fp96h" event={"ID":"f11e2282-12af-4a8d-8f16-eab320d07d4e","Type":"ContainerDied","Data":"5717408776b415176226d251b7c4c7e58edcfad7d8113969840ecfa0ace63871"} Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.432615 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5717408776b415176226d251b7c4c7e58edcfad7d8113969840ecfa0ace63871" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.433156 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-fp96h" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.473533 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlp86\" (UniqueName: \"kubernetes.io/projected/f11e2282-12af-4a8d-8f16-eab320d07d4e-kube-api-access-zlp86\") pod \"f11e2282-12af-4a8d-8f16-eab320d07d4e\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.473752 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-config\") pod \"f11e2282-12af-4a8d-8f16-eab320d07d4e\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.473894 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-combined-ca-bundle\") pod \"f11e2282-12af-4a8d-8f16-eab320d07d4e\" (UID: \"f11e2282-12af-4a8d-8f16-eab320d07d4e\") " Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.485739 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11e2282-12af-4a8d-8f16-eab320d07d4e-kube-api-access-zlp86" (OuterVolumeSpecName: "kube-api-access-zlp86") pod "f11e2282-12af-4a8d-8f16-eab320d07d4e" (UID: "f11e2282-12af-4a8d-8f16-eab320d07d4e"). InnerVolumeSpecName "kube-api-access-zlp86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.528138 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.528185 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.575835 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlp86\" (UniqueName: \"kubernetes.io/projected/f11e2282-12af-4a8d-8f16-eab320d07d4e-kube-api-access-zlp86\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.579903 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f11e2282-12af-4a8d-8f16-eab320d07d4e" (UID: "f11e2282-12af-4a8d-8f16-eab320d07d4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.580546 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.580956 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.584016 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-config" (OuterVolumeSpecName: "config") pod "f11e2282-12af-4a8d-8f16-eab320d07d4e" (UID: "f11e2282-12af-4a8d-8f16-eab320d07d4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.677751 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.677794 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11e2282-12af-4a8d-8f16-eab320d07d4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.711439 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.711481 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.781314 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 14:15:50 crc kubenswrapper[4858]: I1205 14:15:50.792236 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.442036 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-fp96h" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.444689 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.445677 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.445692 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.445703 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.778389 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dcc47cdbf-8v5zs"] Dec 05 14:15:51 crc kubenswrapper[4858]: E1205 14:15:51.781923 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11e2282-12af-4a8d-8f16-eab320d07d4e" containerName="neutron-db-sync" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.782102 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11e2282-12af-4a8d-8f16-eab320d07d4e" containerName="neutron-db-sync" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.782363 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f11e2282-12af-4a8d-8f16-eab320d07d4e" containerName="neutron-db-sync" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.783440 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.888458 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c6dddcfdd-5kzc7"] Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.890202 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.900429 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.900416 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7ptfj" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.900705 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.900923 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.903127 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-svc\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.903162 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-nb\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.903203 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-config\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.903246 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp45v\" (UniqueName: \"kubernetes.io/projected/cd9d9950-37cb-4d6d-9d5e-4180e848883f-kube-api-access-sp45v\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.903265 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-sb\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.903304 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-swift-storage-0\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.941795 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6dddcfdd-5kzc7"] Dec 05 14:15:51 crc kubenswrapper[4858]: I1205 14:15:51.949331 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dcc47cdbf-8v5zs"] Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 
14:15:52.009334 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-config\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrppx\" (UniqueName: \"kubernetes.io/projected/a08a4143-92f7-4cc4-a600-a5449137a190-kube-api-access-qrppx\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009496 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-config\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009542 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp45v\" (UniqueName: \"kubernetes.io/projected/cd9d9950-37cb-4d6d-9d5e-4180e848883f-kube-api-access-sp45v\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009569 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-sb\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-httpd-config\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009662 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-combined-ca-bundle\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009716 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-swift-storage-0\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009760 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-ovndb-tls-certs\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: 
I1205 14:15:52.009879 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-svc\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.009931 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-nb\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.010784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-nb\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.011493 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-swift-storage-0\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.011853 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-config\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.012298 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-svc\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.012727 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-sb\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.042585 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp45v\" (UniqueName: \"kubernetes.io/projected/cd9d9950-37cb-4d6d-9d5e-4180e848883f-kube-api-access-sp45v\") pod \"dnsmasq-dns-5dcc47cdbf-8v5zs\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.115085 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-config\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.115832 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-httpd-config\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.115935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-combined-ca-bundle\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.116019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-ovndb-tls-certs\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.116151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrppx\" (UniqueName: \"kubernetes.io/projected/a08a4143-92f7-4cc4-a600-a5449137a190-kube-api-access-qrppx\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.115294 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.126767 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-config\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.130994 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-combined-ca-bundle\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.142813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-httpd-config\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.143392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-ovndb-tls-certs\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.152614 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrppx\" (UniqueName: \"kubernetes.io/projected/a08a4143-92f7-4cc4-a600-a5449137a190-kube-api-access-qrppx\") pod \"neutron-6c6dddcfdd-5kzc7\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") " pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.241363 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.453026 4858 generic.go:334] "Generic (PLEG): container finished" podID="1fd10daa-322e-4445-9671-d50447afa9d7" containerID="43e75b7cf74f1bebb6928b8b904df33609c5a0614a452248da75d92c95f07020" exitCode=0 Dec 05 14:15:52 crc kubenswrapper[4858]: I1205 14:15:52.454776 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8ccs" event={"ID":"1fd10daa-322e-4445-9671-d50447afa9d7","Type":"ContainerDied","Data":"43e75b7cf74f1bebb6928b8b904df33609c5a0614a452248da75d92c95f07020"} Dec 05 14:15:53 crc kubenswrapper[4858]: I1205 14:15:53.477041 4858 generic.go:334] "Generic (PLEG): container finished" podID="9f8c113e-5e71-4e4f-a8c7-66caea8a6068" containerID="dd0c43c5b3f457cd61d776f4e91369fb64de4b8d51af3fa02bcae90fc1f9ef34" exitCode=0 Dec 05 14:15:53 crc kubenswrapper[4858]: I1205 14:15:53.477299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s8q57" event={"ID":"9f8c113e-5e71-4e4f-a8c7-66caea8a6068","Type":"ContainerDied","Data":"dd0c43c5b3f457cd61d776f4e91369fb64de4b8d51af3fa02bcae90fc1f9ef34"} Dec 05 14:15:53 crc kubenswrapper[4858]: I1205 14:15:53.477955 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:15:53 crc kubenswrapper[4858]: I1205 14:15:53.477981 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:15:53 crc kubenswrapper[4858]: I1205 14:15:53.477961 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:15:53 crc kubenswrapper[4858]: I1205 14:15:53.478102 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.360834 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-794c5555d9-m4bnj"] Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.363670 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.367057 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.367234 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.381779 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-794c5555d9-m4bnj"] Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.420535 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-config\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.420573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-public-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.420620 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnh42\" (UniqueName: \"kubernetes.io/projected/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-kube-api-access-dnh42\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.420653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-ovndb-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.420676 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-httpd-config\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.420696 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-combined-ca-bundle\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.420759 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-internal-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.524926 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-internal-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.525027 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-config\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.525057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-public-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.525108 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnh42\" (UniqueName: \"kubernetes.io/projected/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-kube-api-access-dnh42\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.525144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-ovndb-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.525170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-httpd-config\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.525193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-combined-ca-bundle\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.539121 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-public-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.539268 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-httpd-config\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.540360 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-config\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " 
pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.540867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-ovndb-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.541502 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-internal-tls-certs\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.544224 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-combined-ca-bundle\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.564546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnh42\" (UniqueName: \"kubernetes.io/projected/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-kube-api-access-dnh42\") pod \"neutron-794c5555d9-m4bnj\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:56 crc kubenswrapper[4858]: I1205 14:15:56.687253 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.079055 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.096329 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s8q57" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.136656 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-config-data\") pod \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.136697 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-combined-ca-bundle\") pod \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.136749 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-logs\") pod \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.136963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-scripts\") pod \"1fd10daa-322e-4445-9671-d50447afa9d7\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.137035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-config-data\") pod \"1fd10daa-322e-4445-9671-d50447afa9d7\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.137061 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-combined-ca-bundle\") pod \"1fd10daa-322e-4445-9671-d50447afa9d7\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.137110 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-fernet-keys\") pod \"1fd10daa-322e-4445-9671-d50447afa9d7\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.137198 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrm8\" (UniqueName: \"kubernetes.io/projected/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-kube-api-access-mnrm8\") pod \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.137226 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-credential-keys\") pod \"1fd10daa-322e-4445-9671-d50447afa9d7\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.137279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-scripts\") pod \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " Dec 05 
14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.137305 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4h46\" (UniqueName: \"kubernetes.io/projected/1fd10daa-322e-4445-9671-d50447afa9d7-kube-api-access-l4h46\") pod \"1fd10daa-322e-4445-9671-d50447afa9d7\" (UID: \"1fd10daa-322e-4445-9671-d50447afa9d7\") " Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.153525 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd10daa-322e-4445-9671-d50447afa9d7-kube-api-access-l4h46" (OuterVolumeSpecName: "kube-api-access-l4h46") pod "1fd10daa-322e-4445-9671-d50447afa9d7" (UID: "1fd10daa-322e-4445-9671-d50447afa9d7"). InnerVolumeSpecName "kube-api-access-l4h46". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.154744 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-logs" (OuterVolumeSpecName: "logs") pod "9f8c113e-5e71-4e4f-a8c7-66caea8a6068" (UID: "9f8c113e-5e71-4e4f-a8c7-66caea8a6068"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.156589 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1fd10daa-322e-4445-9671-d50447afa9d7" (UID: "1fd10daa-322e-4445-9671-d50447afa9d7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.166224 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1fd10daa-322e-4445-9671-d50447afa9d7" (UID: "1fd10daa-322e-4445-9671-d50447afa9d7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.166248 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-scripts" (OuterVolumeSpecName: "scripts") pod "9f8c113e-5e71-4e4f-a8c7-66caea8a6068" (UID: "9f8c113e-5e71-4e4f-a8c7-66caea8a6068"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.166397 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-kube-api-access-mnrm8" (OuterVolumeSpecName: "kube-api-access-mnrm8") pod "9f8c113e-5e71-4e4f-a8c7-66caea8a6068" (UID: "9f8c113e-5e71-4e4f-a8c7-66caea8a6068"). InnerVolumeSpecName "kube-api-access-mnrm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.187313 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-scripts" (OuterVolumeSpecName: "scripts") pod "1fd10daa-322e-4445-9671-d50447afa9d7" (UID: "1fd10daa-322e-4445-9671-d50447afa9d7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.267113 4858 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.267429 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.267442 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4h46\" (UniqueName: \"kubernetes.io/projected/1fd10daa-322e-4445-9671-d50447afa9d7-kube-api-access-l4h46\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.267453 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.267466 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.267474 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.267717 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrm8\" (UniqueName: \"kubernetes.io/projected/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-kube-api-access-mnrm8\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.303731 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-config-data" (OuterVolumeSpecName: "config-data") pod "1fd10daa-322e-4445-9671-d50447afa9d7" (UID: "1fd10daa-322e-4445-9671-d50447afa9d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.314774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fd10daa-322e-4445-9671-d50447afa9d7" (UID: "1fd10daa-322e-4445-9671-d50447afa9d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.352032 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f8c113e-5e71-4e4f-a8c7-66caea8a6068" (UID: "9f8c113e-5e71-4e4f-a8c7-66caea8a6068"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.368724 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-config-data" (OuterVolumeSpecName: "config-data") pod "9f8c113e-5e71-4e4f-a8c7-66caea8a6068" (UID: "9f8c113e-5e71-4e4f-a8c7-66caea8a6068"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.368801 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-config-data\") pod \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\" (UID: \"9f8c113e-5e71-4e4f-a8c7-66caea8a6068\") " Dec 05 14:15:57 crc kubenswrapper[4858]: W1205 14:15:57.368981 4858 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9f8c113e-5e71-4e4f-a8c7-66caea8a6068/volumes/kubernetes.io~secret/config-data Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.368992 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-config-data" (OuterVolumeSpecName: "config-data") pod "9f8c113e-5e71-4e4f-a8c7-66caea8a6068" (UID: "9f8c113e-5e71-4e4f-a8c7-66caea8a6068"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.369299 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.369319 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8c113e-5e71-4e4f-a8c7-66caea8a6068-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.369344 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.369354 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd10daa-322e-4445-9671-d50447afa9d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.615890 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bc8a2e-6170-4c4e-9289-ba46ae2768e8","Type":"ContainerStarted","Data":"c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c"} Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.627055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-glkkv" event={"ID":"9be96efe-970b-4639-8744-3e63a0abfbd6","Type":"ContainerStarted","Data":"8c753ac2a459d60383289055d804ab3eda23dcab1c3ac42fbbdc119023a557fd"} Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.656067 4858 generic.go:334] "Generic (PLEG): container finished" podID="945b1178-6672-45ba-bee9-335d1a2fec5c" containerID="9ec7f3c7d56605d95fb866a8fd13d3cac9f348ecabe1632ff44025d37aced302" exitCode=0 Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.656161 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5f99f" event={"ID":"945b1178-6672-45ba-bee9-335d1a2fec5c","Type":"ContainerDied","Data":"9ec7f3c7d56605d95fb866a8fd13d3cac9f348ecabe1632ff44025d37aced302"} Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.659103 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h8ccs" 
event={"ID":"1fd10daa-322e-4445-9671-d50447afa9d7","Type":"ContainerDied","Data":"6e747b92ce414e7770ec8f3ab73a718bd8b1f5df9c3684699aa274ba09c95b8f"} Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.659127 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e747b92ce414e7770ec8f3ab73a718bd8b1f5df9c3684699aa274ba09c95b8f" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.659184 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h8ccs" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.663468 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-glkkv" podStartSLOduration=2.062592531 podStartE2EDuration="59.663450698s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="2025-12-05 14:14:59.525756072 +0000 UTC m=+1108.073354211" lastFinishedPulling="2025-12-05 14:15:57.126614239 +0000 UTC m=+1165.674212378" observedRunningTime="2025-12-05 14:15:57.655262817 +0000 UTC m=+1166.202860966" watchObservedRunningTime="2025-12-05 14:15:57.663450698 +0000 UTC m=+1166.211048837" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.673167 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s8q57" event={"ID":"9f8c113e-5e71-4e4f-a8c7-66caea8a6068","Type":"ContainerDied","Data":"58b91776160cb12cd14d24f2515398b01230a7e687b2fd0ee7d52483aa74028f"} Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.673216 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58b91776160cb12cd14d24f2515398b01230a7e687b2fd0ee7d52483aa74028f" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.673290 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s8q57" Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.705899 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-794c5555d9-m4bnj"] Dec 05 14:15:57 crc kubenswrapper[4858]: I1205 14:15:57.794564 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dcc47cdbf-8v5zs"] Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.307121 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7dbd4c4c5b-8skvw"] Dec 05 14:15:58 crc kubenswrapper[4858]: E1205 14:15:58.310893 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8c113e-5e71-4e4f-a8c7-66caea8a6068" containerName="placement-db-sync" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.310918 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8c113e-5e71-4e4f-a8c7-66caea8a6068" containerName="placement-db-sync" Dec 05 14:15:58 crc kubenswrapper[4858]: E1205 14:15:58.310942 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd10daa-322e-4445-9671-d50447afa9d7" containerName="keystone-bootstrap" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.310950 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd10daa-322e-4445-9671-d50447afa9d7" containerName="keystone-bootstrap" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.311126 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd10daa-322e-4445-9671-d50447afa9d7" containerName="keystone-bootstrap" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.311147 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8c113e-5e71-4e4f-a8c7-66caea8a6068" containerName="placement-db-sync" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.311737 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.323595 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.324117 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.324356 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.324546 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qbtl5" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.327633 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.328004 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.342416 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7dbd4c4c5b-8skvw"] Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.408059 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6dddcfdd-5kzc7"] Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.410563 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-internal-tls-certs\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.410623 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-combined-ca-bundle\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.410655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-fernet-keys\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.410676 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplhn\" (UniqueName: \"kubernetes.io/projected/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-kube-api-access-gplhn\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.410702 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-credential-keys\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.410720 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-config-data\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.410768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-scripts\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.410788 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-public-tls-certs\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.512669 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplhn\" (UniqueName: \"kubernetes.io/projected/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-kube-api-access-gplhn\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.512747 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-credential-keys\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.512782 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-config-data\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.512890 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-scripts\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.512921 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-public-tls-certs\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.512987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-internal-tls-certs\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.513040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-combined-ca-bundle\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.513076 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-fernet-keys\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.515581 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7578ddfc8d-65llf"] Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.516813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-scripts\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.521452 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-combined-ca-bundle\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.524843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-config-data\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.526354 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.527344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-fernet-keys\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.532391 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-internal-tls-certs\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.550004 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-credential-keys\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.550559 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-public-tls-certs\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.550867 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-75p2t" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.550983 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.551605 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.551653 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.551952 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.569251 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7578ddfc8d-65llf"] Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.603571 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gplhn\" (UniqueName: \"kubernetes.io/projected/7ce68600-9f0d-4e3d-98cb-1d1a69d86f06-kube-api-access-gplhn\") pod \"keystone-7dbd4c4c5b-8skvw\" (UID: \"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06\") " pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.630020 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.673422 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.710629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" event={"ID":"cd9d9950-37cb-4d6d-9d5e-4180e848883f","Type":"ContainerStarted","Data":"b527d9b54ca99457dfe7b1d843aee02caea605b3d5008b015ccac272bdf0c0ef"} Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.712321 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6dddcfdd-5kzc7" event={"ID":"a08a4143-92f7-4cc4-a600-a5449137a190","Type":"ContainerStarted","Data":"e5a1cb8b2894fc256fef2cb14c3069b5a02710427d4dc8e83f85f132c8b1a463"} Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.714539 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794c5555d9-m4bnj" event={"ID":"3b098e12-08af-4c9f-8c3c-851b91c2e8a6","Type":"ContainerStarted","Data":"e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f"} Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.714580 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794c5555d9-m4bnj" event={"ID":"3b098e12-08af-4c9f-8c3c-851b91c2e8a6","Type":"ContainerStarted","Data":"7f74b9034cc927e1c59b431f4bd707d0c3b9008c6aa46a94482eb3456f19048f"} Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.716244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7db9j\" (UniqueName: \"kubernetes.io/projected/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-kube-api-access-7db9j\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.716287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-combined-ca-bundle\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.716318 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-logs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.716351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-scripts\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.716420 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-internal-tls-certs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.716465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-public-tls-certs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.716486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-config-data\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.828784 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-internal-tls-certs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.828879 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-public-tls-certs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.828907 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-config-data\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.828949 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7db9j\" (UniqueName: \"kubernetes.io/projected/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-kube-api-access-7db9j\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.828969 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-combined-ca-bundle\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.828996 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-logs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.829025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-scripts\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.832271 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-logs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.842447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-public-tls-certs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.843171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-combined-ca-bundle\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.852344 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-scripts\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.854136 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-internal-tls-certs\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.877941 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-config-data\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.919666 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7db9j\" (UniqueName: \"kubernetes.io/projected/e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72-kube-api-access-7db9j\") pod \"placement-7578ddfc8d-65llf\" (UID: \"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72\") " pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:58 crc kubenswrapper[4858]: I1205 14:15:58.964779 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fb787db8-jqwt8" podUID="f9929d39-1191-4732-a51f-16d2f973bf90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.213519 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.447283 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7dbd4c4c5b-8skvw"] Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.741230 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-5f99f" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.742405 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7dbd4c4c5b-8skvw" event={"ID":"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06","Type":"ContainerStarted","Data":"396e5c27efbf04d9e3411d023728759b0ad2eddf1ef6aa3789f897d5b820bd5a"} Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.752400 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794c5555d9-m4bnj" event={"ID":"3b098e12-08af-4c9f-8c3c-851b91c2e8a6","Type":"ContainerStarted","Data":"1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351"} Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.753535 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.805218 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-794c5555d9-m4bnj" podStartSLOduration=3.805196334 podStartE2EDuration="3.805196334s" podCreationTimestamp="2025-12-05 14:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:15:59.793757525 +0000 UTC m=+1168.341355674" watchObservedRunningTime="2025-12-05 14:15:59.805196334 +0000 UTC m=+1168.352794473" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.810747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" event={"ID":"cd9d9950-37cb-4d6d-9d5e-4180e848883f","Type":"ContainerDied","Data":"ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf"} Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.810426 4858 generic.go:334] "Generic (PLEG): container finished" podID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" containerID="ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf" exitCode=0 Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.850792 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-5f99f" event={"ID":"945b1178-6672-45ba-bee9-335d1a2fec5c","Type":"ContainerDied","Data":"4df9e7f00d27d45ad81398e956714580d4421b7baa214207d1b858a8bac5d317"} Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.850856 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df9e7f00d27d45ad81398e956714580d4421b7baa214207d1b858a8bac5d317" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.850937 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-5f99f" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.857727 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-combined-ca-bundle\") pod \"945b1178-6672-45ba-bee9-335d1a2fec5c\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.857784 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-db-sync-config-data\") pod \"945b1178-6672-45ba-bee9-335d1a2fec5c\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.857897 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbgp6\" (UniqueName: \"kubernetes.io/projected/945b1178-6672-45ba-bee9-335d1a2fec5c-kube-api-access-qbgp6\") pod \"945b1178-6672-45ba-bee9-335d1a2fec5c\" (UID: \"945b1178-6672-45ba-bee9-335d1a2fec5c\") " Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.875762 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "945b1178-6672-45ba-bee9-335d1a2fec5c" (UID: "945b1178-6672-45ba-bee9-335d1a2fec5c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.886058 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945b1178-6672-45ba-bee9-335d1a2fec5c-kube-api-access-qbgp6" (OuterVolumeSpecName: "kube-api-access-qbgp6") pod "945b1178-6672-45ba-bee9-335d1a2fec5c" (UID: "945b1178-6672-45ba-bee9-335d1a2fec5c"). InnerVolumeSpecName "kube-api-access-qbgp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.895128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fbkbh" event={"ID":"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd","Type":"ContainerStarted","Data":"391ba69855cd14c436b0eec6786e635e6fe96366f292095edb7bfe314cefed77"} Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.917894 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-fbkbh" podStartSLOduration=4.848246895 podStartE2EDuration="1m1.917874003s" podCreationTimestamp="2025-12-05 14:14:58 +0000 UTC" firstStartedPulling="2025-12-05 14:15:00.05475918 +0000 UTC m=+1108.602357309" lastFinishedPulling="2025-12-05 14:15:57.124386278 +0000 UTC m=+1165.671984417" observedRunningTime="2025-12-05 14:15:59.912984502 +0000 UTC m=+1168.460582641" watchObservedRunningTime="2025-12-05 14:15:59.917874003 +0000 UTC m=+1168.465472142" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.963003 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.963035 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbgp6\" (UniqueName: \"kubernetes.io/projected/945b1178-6672-45ba-bee9-335d1a2fec5c-kube-api-access-qbgp6\") on node \"crc\" DevicePath \"\"" Dec 05 14:15:59 crc kubenswrapper[4858]: I1205 14:15:59.983810 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "945b1178-6672-45ba-bee9-335d1a2fec5c" (UID: "945b1178-6672-45ba-bee9-335d1a2fec5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.040218 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6dddcfdd-5kzc7" event={"ID":"a08a4143-92f7-4cc4-a600-a5449137a190","Type":"ContainerStarted","Data":"b778d8fc7b39e5648781ead32eb7b0aca9b90862151db9ab77351a6069a6f47a"} Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.065138 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945b1178-6672-45ba-bee9-335d1a2fec5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.395915 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7578ddfc8d-65llf"] Dec 05 14:16:00 crc kubenswrapper[4858]: W1205 14:16:00.419229 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode17d6d53_6ef7_4cd7_b7e8_8ed149e97c72.slice/crio-141a0cf613e937e25de424c332adb704ef83008ac939be970304077e6856af2f WatchSource:0}: Error finding container 141a0cf613e937e25de424c332adb704ef83008ac939be970304077e6856af2f: Status 404 returned error can't find the container with id 141a0cf613e937e25de424c332adb704ef83008ac939be970304077e6856af2f Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.940541 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.941060 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.953570 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7578ddfc8d-65llf" event={"ID":"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72","Type":"ContainerStarted","Data":"73b27274eae866c92844786a30d378ccfe6c92386785573b0b89e33d390ecc7a"} Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.953605 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7578ddfc8d-65llf" event={"ID":"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72","Type":"ContainerStarted","Data":"141a0cf613e937e25de424c332adb704ef83008ac939be970304077e6856af2f"} Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.963904 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" event={"ID":"cd9d9950-37cb-4d6d-9d5e-4180e848883f","Type":"ContainerStarted","Data":"36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640"} Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.963940 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.970851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6dddcfdd-5kzc7" event={"ID":"a08a4143-92f7-4cc4-a600-a5449137a190","Type":"ContainerStarted","Data":"871fb8f5ccdeaffdbec27df82e46b0ee2ee341c1a450ef81050b37d03ebdf571"} Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.971726 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.982845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7dbd4c4c5b-8skvw" 
event={"ID":"7ce68600-9f0d-4e3d-98cb-1d1a69d86f06","Type":"ContainerStarted","Data":"1938d126912440df8a18017eaf7b407d8d36d81f87ef9dd5da4d938e7772342b"} Dec 05 14:16:00 crc kubenswrapper[4858]: I1205 14:16:00.982994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.031797 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" podStartSLOduration=10.031767843 podStartE2EDuration="10.031767843s" podCreationTimestamp="2025-12-05 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:01.027590059 +0000 UTC m=+1169.575188198" watchObservedRunningTime="2025-12-05 14:16:01.031767843 +0000 UTC m=+1169.579365982" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.128645 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c6dddcfdd-5kzc7" podStartSLOduration=10.128627898 podStartE2EDuration="10.128627898s" podCreationTimestamp="2025-12-05 14:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:01.127446936 +0000 UTC m=+1169.675045065" watchObservedRunningTime="2025-12-05 14:16:01.128627898 +0000 UTC m=+1169.676226037" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.167236 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-755dbf8d4-xrmmq"] Dec 05 14:16:01 crc kubenswrapper[4858]: E1205 14:16:01.167634 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945b1178-6672-45ba-bee9-335d1a2fec5c" containerName="barbican-db-sync" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.167650 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="945b1178-6672-45ba-bee9-335d1a2fec5c" containerName="barbican-db-sync" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.167864 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="945b1178-6672-45ba-bee9-335d1a2fec5c" containerName="barbican-db-sync" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.168756 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.177965 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-phngb" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.178242 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.178377 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.212840 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-77665dc6c-62v92"] Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.214793 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.233349 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.252082 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-755dbf8d4-xrmmq"] Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.253765 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7dbd4c4c5b-8skvw" podStartSLOduration=3.253743074 podStartE2EDuration="3.253743074s" podCreationTimestamp="2025-12-05 14:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:01.16021523 +0000 UTC m=+1169.707813369" watchObservedRunningTime="2025-12-05 14:16:01.253743074 +0000 UTC m=+1169.801341203" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.293226 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77665dc6c-62v92"] Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300285 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-config-data\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300352 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vp89\" (UniqueName: \"kubernetes.io/projected/dd400785-86ec-48a7-a696-22fd1b66ed5b-kube-api-access-7vp89\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300390 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2kx5\" (UniqueName: \"kubernetes.io/projected/c4093095-9772-4106-bf1b-8bc5a556e460-kube-api-access-d2kx5\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300426 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-config-data-custom\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300475 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd400785-86ec-48a7-a696-22fd1b66ed5b-logs\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4093095-9772-4106-bf1b-8bc5a556e460-logs\") pod 
\"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300562 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-combined-ca-bundle\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-combined-ca-bundle\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300642 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-config-data\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.300707 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-config-data-custom\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.401767 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-config-data-custom\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.402184 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-config-data\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.402372 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vp89\" (UniqueName: \"kubernetes.io/projected/dd400785-86ec-48a7-a696-22fd1b66ed5b-kube-api-access-7vp89\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.402458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2kx5\" (UniqueName: \"kubernetes.io/projected/c4093095-9772-4106-bf1b-8bc5a556e460-kube-api-access-d2kx5\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.402536 
4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-config-data-custom\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.402619 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd400785-86ec-48a7-a696-22fd1b66ed5b-logs\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.402685 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4093095-9772-4106-bf1b-8bc5a556e460-logs\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.402776 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-combined-ca-bundle\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.402904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-combined-ca-bundle\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.403023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-config-data\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.406673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4093095-9772-4106-bf1b-8bc5a556e460-logs\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.406957 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd400785-86ec-48a7-a696-22fd1b66ed5b-logs\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.412661 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-combined-ca-bundle\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc 
kubenswrapper[4858]: I1205 14:16:01.413504 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-config-data\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.414133 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-config-data-custom\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.415703 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-config-data-custom\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.416875 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd400785-86ec-48a7-a696-22fd1b66ed5b-combined-ca-bundle\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.421321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4093095-9772-4106-bf1b-8bc5a556e460-config-data\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.435593 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dcc47cdbf-8v5zs"] Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.503788 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vp89\" (UniqueName: \"kubernetes.io/projected/dd400785-86ec-48a7-a696-22fd1b66ed5b-kube-api-access-7vp89\") pod \"barbican-worker-77665dc6c-62v92\" (UID: \"dd400785-86ec-48a7-a696-22fd1b66ed5b\") " pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.558081 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2kx5\" (UniqueName: \"kubernetes.io/projected/c4093095-9772-4106-bf1b-8bc5a556e460-kube-api-access-d2kx5\") pod \"barbican-keystone-listener-755dbf8d4-xrmmq\" (UID: \"c4093095-9772-4106-bf1b-8bc5a556e460\") " pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.571772 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77665dc6c-62v92" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.655582 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-697d8bbbf9-dvsmf"] Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.658237 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.721712 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697d8bbbf9-dvsmf"] Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.771403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-nb\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.771447 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-config\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.771506 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-swift-storage-0\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.771554 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-sb\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.775881 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d64554cfb-x842g"] Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.777391 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.778041 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-svc\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.778101 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qw7\" (UniqueName: \"kubernetes.io/projected/545af5cd-079a-4dab-a389-163d5560a8f5-kube-api-access-h7qw7\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.798171 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.817003 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.847761 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d64554cfb-x842g"] Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881392 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-nb\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881439 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-combined-ca-bundle\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881460 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-config\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881496 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-swift-storage-0\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881518 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x94rh\" (UniqueName: \"kubernetes.io/projected/f405006f-5489-4c10-916b-c1118b7a3bd7-kube-api-access-x94rh\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881539 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f405006f-5489-4c10-916b-c1118b7a3bd7-logs\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881556 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-sb\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881578 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-svc\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881594 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h7qw7\" (UniqueName: \"kubernetes.io/projected/545af5cd-079a-4dab-a389-163d5560a8f5-kube-api-access-h7qw7\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881613 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data-custom\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.881661 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.882547 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-nb\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.883168 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-config\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.883648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-swift-storage-0\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.886680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-sb\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.887289 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-svc\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.947241 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7qw7\" (UniqueName: \"kubernetes.io/projected/545af5cd-079a-4dab-a389-163d5560a8f5-kube-api-access-h7qw7\") pod \"dnsmasq-dns-697d8bbbf9-dvsmf\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") " pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.988189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.988284 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-combined-ca-bundle\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.988342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x94rh\" (UniqueName: \"kubernetes.io/projected/f405006f-5489-4c10-916b-c1118b7a3bd7-kube-api-access-x94rh\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.988368 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f405006f-5489-4c10-916b-c1118b7a3bd7-logs\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.988400 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data-custom\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.992060 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f405006f-5489-4c10-916b-c1118b7a3bd7-logs\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:01 crc kubenswrapper[4858]: I1205 14:16:01.995816 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.002142 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data-custom\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.010469 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-combined-ca-bundle\") pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.041236 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x94rh\" (UniqueName: \"kubernetes.io/projected/f405006f-5489-4c10-916b-c1118b7a3bd7-kube-api-access-x94rh\") 
pod \"barbican-api-6d64554cfb-x842g\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.075392 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7578ddfc8d-65llf" event={"ID":"e17d6d53-6ef7-4cd7-b7e8-8ed149e97c72","Type":"ContainerStarted","Data":"0d652601b3b1c9b2f0f38fcc8385fb0f3d1ac6c259d580c1dd2aee4f93ed98c7"} Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.083075 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.083105 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.117340 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.140305 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.513034 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7578ddfc8d-65llf" podStartSLOduration=4.513016393 podStartE2EDuration="4.513016393s" podCreationTimestamp="2025-12-05 14:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:02.116990628 +0000 UTC m=+1170.664588767" watchObservedRunningTime="2025-12-05 14:16:02.513016393 +0000 UTC m=+1171.060614542" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.524257 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77665dc6c-62v92"] Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.741263 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.741667 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:16:02 crc kubenswrapper[4858]: W1205 14:16:02.859482 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4093095_9772_4106_bf1b_8bc5a556e460.slice/crio-e68204b368e238bc783d0a0ba9c3ff85467200c40f3d6716ac5c3f7f2f294e6d WatchSource:0}: Error finding container e68204b368e238bc783d0a0ba9c3ff85467200c40f3d6716ac5c3f7f2f294e6d: Status 404 returned error can't find the container with id e68204b368e238bc783d0a0ba9c3ff85467200c40f3d6716ac5c3f7f2f294e6d Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.863874 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-755dbf8d4-xrmmq"] Dec 05 14:16:02 crc kubenswrapper[4858]: I1205 14:16:02.878706 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697d8bbbf9-dvsmf"] Dec 05 14:16:03 crc kubenswrapper[4858]: I1205 14:16:03.102895 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77665dc6c-62v92" event={"ID":"dd400785-86ec-48a7-a696-22fd1b66ed5b","Type":"ContainerStarted","Data":"fd4701f54a0ca9b1b701a68cefe924bb25722f81103d4c9d00663d1510a04ee8"} Dec 05 14:16:03 crc kubenswrapper[4858]: I1205 14:16:03.109190 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" event={"ID":"545af5cd-079a-4dab-a389-163d5560a8f5","Type":"ContainerStarted","Data":"d35e5204220a6e14eda81ec2825241445119dc5a27176c7db53121c02488fd70"} Dec 05 14:16:03 crc kubenswrapper[4858]: I1205 14:16:03.126838 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" podUID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" containerName="dnsmasq-dns" containerID="cri-o://36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640" gracePeriod=10 Dec 05 14:16:03 crc kubenswrapper[4858]: I1205 14:16:03.126926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" event={"ID":"c4093095-9772-4106-bf1b-8bc5a556e460","Type":"ContainerStarted","Data":"e68204b368e238bc783d0a0ba9c3ff85467200c40f3d6716ac5c3f7f2f294e6d"} Dec 05 14:16:03 crc kubenswrapper[4858]: I1205 14:16:03.133312 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d64554cfb-x842g"] Dec 05 14:16:03 crc kubenswrapper[4858]: I1205 14:16:03.150722 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 14:16:03 crc kubenswrapper[4858]: I1205 14:16:03.823450 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.048234 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.112675 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-nb\") pod \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.112785 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-config\") pod \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.112873 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-svc\") pod \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.112948 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-swift-storage-0\") pod \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.226621 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-sb\") pod \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.226760 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp45v\" (UniqueName: 
\"kubernetes.io/projected/cd9d9950-37cb-4d6d-9d5e-4180e848883f-kube-api-access-sp45v\") pod \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\" (UID: \"cd9d9950-37cb-4d6d-9d5e-4180e848883f\") " Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.228256 4858 generic.go:334] "Generic (PLEG): container finished" podID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" containerID="36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640" exitCode=0 Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.228385 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" event={"ID":"cd9d9950-37cb-4d6d-9d5e-4180e848883f","Type":"ContainerDied","Data":"36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640"} Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.228435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" event={"ID":"cd9d9950-37cb-4d6d-9d5e-4180e848883f","Type":"ContainerDied","Data":"b527d9b54ca99457dfe7b1d843aee02caea605b3d5008b015ccac272bdf0c0ef"} Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.228452 4858 scope.go:117] "RemoveContainer" containerID="36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.228657 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dcc47cdbf-8v5zs" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.264803 4858 generic.go:334] "Generic (PLEG): container finished" podID="545af5cd-079a-4dab-a389-163d5560a8f5" containerID="7a7880b1c9dc419401b73d629461dab77a7cdb75438300e63daa7a00ffe67189" exitCode=0 Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.264919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" event={"ID":"545af5cd-079a-4dab-a389-163d5560a8f5","Type":"ContainerDied","Data":"7a7880b1c9dc419401b73d629461dab77a7cdb75438300e63daa7a00ffe67189"} Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.291113 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9d9950-37cb-4d6d-9d5e-4180e848883f-kube-api-access-sp45v" (OuterVolumeSpecName: "kube-api-access-sp45v") pod "cd9d9950-37cb-4d6d-9d5e-4180e848883f" (UID: "cd9d9950-37cb-4d6d-9d5e-4180e848883f"). InnerVolumeSpecName "kube-api-access-sp45v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.295770 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd9d9950-37cb-4d6d-9d5e-4180e848883f" (UID: "cd9d9950-37cb-4d6d-9d5e-4180e848883f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.309882 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-config" (OuterVolumeSpecName: "config") pod "cd9d9950-37cb-4d6d-9d5e-4180e848883f" (UID: "cd9d9950-37cb-4d6d-9d5e-4180e848883f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.312870 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64554cfb-x842g" event={"ID":"f405006f-5489-4c10-916b-c1118b7a3bd7","Type":"ContainerStarted","Data":"fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd"} Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.312924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64554cfb-x842g" event={"ID":"f405006f-5489-4c10-916b-c1118b7a3bd7","Type":"ContainerStarted","Data":"0ad509bfb55fd04c3d4317366fb075de926b1e869dc21f57f6346956020124df"} Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.323703 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd9d9950-37cb-4d6d-9d5e-4180e848883f" (UID: "cd9d9950-37cb-4d6d-9d5e-4180e848883f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.335019 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd9d9950-37cb-4d6d-9d5e-4180e848883f" (UID: "cd9d9950-37cb-4d6d-9d5e-4180e848883f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.337471 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp45v\" (UniqueName: \"kubernetes.io/projected/cd9d9950-37cb-4d6d-9d5e-4180e848883f-kube-api-access-sp45v\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.338171 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.338295 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.338354 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.338410 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.351264 4858 scope.go:117] "RemoveContainer" containerID="ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.410750 4858 scope.go:117] "RemoveContainer" containerID="36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640" Dec 05 14:16:04 crc kubenswrapper[4858]: E1205 14:16:04.411413 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640\": container with ID starting with 
36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640 not found: ID does not exist" containerID="36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.411452 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640"} err="failed to get container status \"36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640\": rpc error: code = NotFound desc = could not find container \"36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640\": container with ID starting with 36bf142cf908c8a27de750670de0d0dfcc95c1dca9bd9374a7dc5d0f3ad77640 not found: ID does not exist" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.411478 4858 scope.go:117] "RemoveContainer" containerID="ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf" Dec 05 14:16:04 crc kubenswrapper[4858]: E1205 14:16:04.411759 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf\": container with ID starting with ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf not found: ID does not exist" containerID="ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.411782 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf"} err="failed to get container status \"ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf\": rpc error: code = NotFound desc = could not find container \"ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf\": container with ID starting with ca9f89628278f3b0cbe27ced9e7b0acf9c28214d11771b670e73dc743f4a53bf not found: ID does not exist" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.430056 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd9d9950-37cb-4d6d-9d5e-4180e848883f" (UID: "cd9d9950-37cb-4d6d-9d5e-4180e848883f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.442612 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9d9950-37cb-4d6d-9d5e-4180e848883f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.593528 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dcc47cdbf-8v5zs"] Dec 05 14:16:04 crc kubenswrapper[4858]: I1205 14:16:04.619701 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dcc47cdbf-8v5zs"] Dec 05 14:16:05 crc kubenswrapper[4858]: I1205 14:16:05.334028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64554cfb-x842g" event={"ID":"f405006f-5489-4c10-916b-c1118b7a3bd7","Type":"ContainerStarted","Data":"596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b"} Dec 05 14:16:05 crc kubenswrapper[4858]: I1205 14:16:05.334630 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:05 crc kubenswrapper[4858]: I1205 14:16:05.334665 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:05 crc kubenswrapper[4858]: I1205 14:16:05.350186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" event={"ID":"545af5cd-079a-4dab-a389-163d5560a8f5","Type":"ContainerStarted","Data":"0a93d7fdb619e704b967838726a55e7605b8aa9fee2d39de83fac2d1e4444f63"} Dec 05 14:16:05 crc kubenswrapper[4858]: I1205 14:16:05.351432 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:05 crc kubenswrapper[4858]: I1205 14:16:05.354894 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d64554cfb-x842g" podStartSLOduration=4.354881879 podStartE2EDuration="4.354881879s" podCreationTimestamp="2025-12-05 14:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:05.352054982 +0000 UTC m=+1173.899653141" watchObservedRunningTime="2025-12-05 14:16:05.354881879 +0000 UTC m=+1173.902480018" Dec 05 14:16:05 crc kubenswrapper[4858]: I1205 14:16:05.398284 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" podStartSLOduration=4.398258164 podStartE2EDuration="4.398258164s" podCreationTimestamp="2025-12-05 14:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:05.386032901 +0000 UTC m=+1173.933631060" watchObservedRunningTime="2025-12-05 14:16:05.398258164 +0000 UTC m=+1173.945856303" Dec 05 14:16:05 crc kubenswrapper[4858]: I1205 14:16:05.913884 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" path="/var/lib/kubelet/pods/cd9d9950-37cb-4d6d-9d5e-4180e848883f/volumes" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.402466 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8688cb6d6-l5l7t"] Dec 05 14:16:06 crc kubenswrapper[4858]: E1205 14:16:06.402935 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" 
containerName="dnsmasq-dns" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.402953 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" containerName="dnsmasq-dns" Dec 05 14:16:06 crc kubenswrapper[4858]: E1205 14:16:06.402971 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" containerName="init" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.402978 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" containerName="init" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.403197 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d9950-37cb-4d6d-9d5e-4180e848883f" containerName="dnsmasq-dns" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.407784 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.411878 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.421245 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.435373 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8688cb6d6-l5l7t"] Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.487230 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-config-data\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.487316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-config-data-custom\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.487351 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4885ff0b-6266-49b4-a554-82b6504f932d-logs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.487373 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-public-tls-certs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.487391 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnk4\" (UniqueName: \"kubernetes.io/projected/4885ff0b-6266-49b4-a554-82b6504f932d-kube-api-access-2bnk4\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: 
I1205 14:16:06.489817 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-combined-ca-bundle\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.490003 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-internal-tls-certs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.592165 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-config-data\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.592225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-config-data-custom\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.592255 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4885ff0b-6266-49b4-a554-82b6504f932d-logs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.592278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-public-tls-certs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.592296 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bnk4\" (UniqueName: \"kubernetes.io/projected/4885ff0b-6266-49b4-a554-82b6504f932d-kube-api-access-2bnk4\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.592339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-combined-ca-bundle\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.592358 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-internal-tls-certs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: 
I1205 14:16:06.593467 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4885ff0b-6266-49b4-a554-82b6504f932d-logs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.599037 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-public-tls-certs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.602033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-config-data\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.603033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-config-data-custom\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.604633 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-internal-tls-certs\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.625492 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4885ff0b-6266-49b4-a554-82b6504f932d-combined-ca-bundle\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.631163 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bnk4\" (UniqueName: \"kubernetes.io/projected/4885ff0b-6266-49b4-a554-82b6504f932d-kube-api-access-2bnk4\") pod \"barbican-api-8688cb6d6-l5l7t\" (UID: \"4885ff0b-6266-49b4-a554-82b6504f932d\") " pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:06 crc kubenswrapper[4858]: I1205 14:16:06.736316 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:08 crc kubenswrapper[4858]: I1205 14:16:08.350348 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8688cb6d6-l5l7t"] Dec 05 14:16:08 crc kubenswrapper[4858]: I1205 14:16:08.394299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" event={"ID":"c4093095-9772-4106-bf1b-8bc5a556e460","Type":"ContainerStarted","Data":"60752f56e35d5299fdb079eb4bfb2746ee11f15b73da95157fe387c7eb471ac8"} Dec 05 14:16:08 crc kubenswrapper[4858]: I1205 14:16:08.399769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77665dc6c-62v92" event={"ID":"dd400785-86ec-48a7-a696-22fd1b66ed5b","Type":"ContainerStarted","Data":"bdc84d625a5aa320a0cd8dce05428b6317a13d84be3cb9d7309b1edcb557f023"} Dec 05 14:16:08 crc kubenswrapper[4858]: I1205 14:16:08.407027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8688cb6d6-l5l7t" event={"ID":"4885ff0b-6266-49b4-a554-82b6504f932d","Type":"ContainerStarted","Data":"1ff7cf859160f19fa355806509925432367675e59b98a60aaeffa22ef344ef23"} Dec 05 14:16:08 crc kubenswrapper[4858]: I1205 14:16:08.654043 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 05 14:16:08 crc kubenswrapper[4858]: I1205 14:16:08.963355 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fb787db8-jqwt8" podUID="f9929d39-1191-4732-a51f-16d2f973bf90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 05 14:16:09 crc kubenswrapper[4858]: I1205 14:16:09.420281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77665dc6c-62v92" event={"ID":"dd400785-86ec-48a7-a696-22fd1b66ed5b","Type":"ContainerStarted","Data":"10a0f5247856f1b641bcf67fc2adc125b3f63b1d833ac5a957da0e91df77b603"} Dec 05 14:16:09 crc kubenswrapper[4858]: I1205 14:16:09.424264 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8688cb6d6-l5l7t" event={"ID":"4885ff0b-6266-49b4-a554-82b6504f932d","Type":"ContainerStarted","Data":"3eadec36c89d878c78903ce8297a1bff0ec8389aaec337db8f7a5a3257cf98b4"} Dec 05 14:16:09 crc kubenswrapper[4858]: I1205 14:16:09.424299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8688cb6d6-l5l7t" event={"ID":"4885ff0b-6266-49b4-a554-82b6504f932d","Type":"ContainerStarted","Data":"23d0ddc51c00ef0c01f9c879789550bee3a304576d5a116369aa3ae3f1ffb627"} Dec 05 14:16:09 crc kubenswrapper[4858]: I1205 14:16:09.424872 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:09 crc kubenswrapper[4858]: I1205 14:16:09.424898 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:09 crc kubenswrapper[4858]: I1205 14:16:09.427438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" 
event={"ID":"c4093095-9772-4106-bf1b-8bc5a556e460","Type":"ContainerStarted","Data":"e81b48245e8d5a0d5dfa63a9ff5dafc53cbed8e11992269fe7d80f83656993be"} Dec 05 14:16:09 crc kubenswrapper[4858]: I1205 14:16:09.444409 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-77665dc6c-62v92" podStartSLOduration=3.448820256 podStartE2EDuration="8.444391367s" podCreationTimestamp="2025-12-05 14:16:01 +0000 UTC" firstStartedPulling="2025-12-05 14:16:02.558765472 +0000 UTC m=+1171.106363611" lastFinishedPulling="2025-12-05 14:16:07.554336583 +0000 UTC m=+1176.101934722" observedRunningTime="2025-12-05 14:16:09.439480303 +0000 UTC m=+1177.987078462" watchObservedRunningTime="2025-12-05 14:16:09.444391367 +0000 UTC m=+1177.991989506" Dec 05 14:16:09 crc kubenswrapper[4858]: I1205 14:16:09.475884 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-755dbf8d4-xrmmq" podStartSLOduration=3.7825125 podStartE2EDuration="8.475860477s" podCreationTimestamp="2025-12-05 14:16:01 +0000 UTC" firstStartedPulling="2025-12-05 14:16:02.917253322 +0000 UTC m=+1171.464851461" lastFinishedPulling="2025-12-05 14:16:07.610601299 +0000 UTC m=+1176.158199438" observedRunningTime="2025-12-05 14:16:09.46537416 +0000 UTC m=+1178.012972299" watchObservedRunningTime="2025-12-05 14:16:09.475860477 +0000 UTC m=+1178.023458606" Dec 05 14:16:10 crc kubenswrapper[4858]: I1205 14:16:10.438925 4858 generic.go:334] "Generic (PLEG): container finished" podID="9be96efe-970b-4639-8744-3e63a0abfbd6" containerID="8c753ac2a459d60383289055d804ab3eda23dcab1c3ac42fbbdc119023a557fd" exitCode=0 Dec 05 14:16:10 crc kubenswrapper[4858]: I1205 14:16:10.439285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-glkkv" event={"ID":"9be96efe-970b-4639-8744-3e63a0abfbd6","Type":"ContainerDied","Data":"8c753ac2a459d60383289055d804ab3eda23dcab1c3ac42fbbdc119023a557fd"} Dec 05 14:16:10 crc kubenswrapper[4858]: I1205 14:16:10.462237 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8688cb6d6-l5l7t" podStartSLOduration=4.462220912 podStartE2EDuration="4.462220912s" podCreationTimestamp="2025-12-05 14:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:09.495152303 +0000 UTC m=+1178.042750442" watchObservedRunningTime="2025-12-05 14:16:10.462220912 +0000 UTC m=+1179.009819051" Dec 05 14:16:12 crc kubenswrapper[4858]: I1205 14:16:12.118989 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" Dec 05 14:16:12 crc kubenswrapper[4858]: I1205 14:16:12.172652 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f889b9ff-thbrt"] Dec 05 14:16:12 crc kubenswrapper[4858]: I1205 14:16:12.172901 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" podUID="303565f4-49fa-4a41-9884-c801202229cb" containerName="dnsmasq-dns" containerID="cri-o://03b65ed9d886af294a30dc7ae883fd1c935d17612c5da3b40714a50a31b4c17d" gracePeriod=10 Dec 05 14:16:12 crc kubenswrapper[4858]: I1205 14:16:12.461832 4858 generic.go:334] "Generic (PLEG): container finished" podID="303565f4-49fa-4a41-9884-c801202229cb" containerID="03b65ed9d886af294a30dc7ae883fd1c935d17612c5da3b40714a50a31b4c17d" exitCode=0 Dec 05 14:16:12 crc kubenswrapper[4858]: I1205 
14:16:12.461872 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" event={"ID":"303565f4-49fa-4a41-9884-c801202229cb","Type":"ContainerDied","Data":"03b65ed9d886af294a30dc7ae883fd1c935d17612c5da3b40714a50a31b4c17d"} Dec 05 14:16:14 crc kubenswrapper[4858]: I1205 14:16:14.427129 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" podUID="303565f4-49fa-4a41-9884-c801202229cb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Dec 05 14:16:14 crc kubenswrapper[4858]: I1205 14:16:14.491000 4858 generic.go:334] "Generic (PLEG): container finished" podID="aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" containerID="391ba69855cd14c436b0eec6786e635e6fe96366f292095edb7bfe314cefed77" exitCode=0 Dec 05 14:16:14 crc kubenswrapper[4858]: I1205 14:16:14.491044 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fbkbh" event={"ID":"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd","Type":"ContainerDied","Data":"391ba69855cd14c436b0eec6786e635e6fe96366f292095edb7bfe314cefed77"} Dec 05 14:16:15 crc kubenswrapper[4858]: I1205 14:16:15.501928 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerID="fff9443d5f06d90d5763eef075c535599704e5911c68b84cf3f3a9a6ddd9ab9d" exitCode=137 Dec 05 14:16:15 crc kubenswrapper[4858]: I1205 14:16:15.502048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575f67464c-nsrld" event={"ID":"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c","Type":"ContainerDied","Data":"fff9443d5f06d90d5763eef075c535599704e5911c68b84cf3f3a9a6ddd9ab9d"} Dec 05 14:16:16 crc kubenswrapper[4858]: I1205 14:16:16.069417 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:16 crc kubenswrapper[4858]: I1205 14:16:16.418440 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:16 crc kubenswrapper[4858]: I1205 14:16:16.539878 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerID="2d72059b626402160d5fe6efc207e1fdabb4d9e3e9be37933836671ce03f6f1d" exitCode=137 Dec 05 14:16:16 crc kubenswrapper[4858]: I1205 14:16:16.539945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575f67464c-nsrld" event={"ID":"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c","Type":"ContainerDied","Data":"2d72059b626402160d5fe6efc207e1fdabb4d9e3e9be37933836671ce03f6f1d"} Dec 05 14:16:18 crc kubenswrapper[4858]: I1205 14:16:18.654028 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 05 14:16:18 crc kubenswrapper[4858]: I1205 14:16:18.655020 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:16:18 crc kubenswrapper[4858]: I1205 14:16:18.655738 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"61f7dd3bef7baaad01301f499bc946fc6b7f67a00416e4a5dc1f0bf9d190b0df"} pod="openstack/horizon-66fd8d549b-n87dk" containerMessage="Container horizon 
failed startup probe, will be restarted" Dec 05 14:16:18 crc kubenswrapper[4858]: I1205 14:16:18.655775 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" containerID="cri-o://61f7dd3bef7baaad01301f499bc946fc6b7f67a00416e4a5dc1f0bf9d190b0df" gracePeriod=30 Dec 05 14:16:18 crc kubenswrapper[4858]: I1205 14:16:18.960113 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fb787db8-jqwt8" podUID="f9929d39-1191-4732-a51f-16d2f973bf90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 05 14:16:18 crc kubenswrapper[4858]: I1205 14:16:18.960185 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:16:18 crc kubenswrapper[4858]: I1205 14:16:18.960912 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"22d0a7a46fc3ae2b7828a4d3f6f59aa262bf8ed16cb09331868098f002150ec0"} pod="openstack/horizon-66fb787db8-jqwt8" containerMessage="Container horizon failed startup probe, will be restarted" Dec 05 14:16:18 crc kubenswrapper[4858]: I1205 14:16:18.960960 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66fb787db8-jqwt8" podUID="f9929d39-1191-4732-a51f-16d2f973bf90" containerName="horizon" containerID="cri-o://22d0a7a46fc3ae2b7828a4d3f6f59aa262bf8ed16cb09331868098f002150ec0" gracePeriod=30 Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.243518 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.263445 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-glkkv" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.332511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-db-sync-config-data\") pod \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.332886 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-scripts\") pod \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.332916 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-combined-ca-bundle\") pod \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.333059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-etc-machine-id\") pod \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.333095 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f298f\" (UniqueName: \"kubernetes.io/projected/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-kube-api-access-f298f\") pod \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.333144 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-config-data\") pod \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\" (UID: \"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.334245 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" (UID: "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.350627 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" (UID: "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.351304 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-scripts" (OuterVolumeSpecName: "scripts") pod "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" (UID: "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.366329 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-kube-api-access-f298f" (OuterVolumeSpecName: "kube-api-access-f298f") pod "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" (UID: "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd"). InnerVolumeSpecName "kube-api-access-f298f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.402090 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" (UID: "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.427520 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" podUID="303565f4-49fa-4a41-9884-c801202229cb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.435256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-config-data\") pod \"9be96efe-970b-4639-8744-3e63a0abfbd6\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.435375 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-combined-ca-bundle\") pod \"9be96efe-970b-4639-8744-3e63a0abfbd6\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.435461 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klfr6\" (UniqueName: \"kubernetes.io/projected/9be96efe-970b-4639-8744-3e63a0abfbd6-kube-api-access-klfr6\") pod \"9be96efe-970b-4639-8744-3e63a0abfbd6\" (UID: \"9be96efe-970b-4639-8744-3e63a0abfbd6\") " Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.436062 4858 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.436085 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.436099 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.436110 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.436121 4858 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-f298f\" (UniqueName: \"kubernetes.io/projected/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-kube-api-access-f298f\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.453181 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be96efe-970b-4639-8744-3e63a0abfbd6-kube-api-access-klfr6" (OuterVolumeSpecName: "kube-api-access-klfr6") pod "9be96efe-970b-4639-8744-3e63a0abfbd6" (UID: "9be96efe-970b-4639-8744-3e63a0abfbd6"). InnerVolumeSpecName "kube-api-access-klfr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.453308 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-config-data" (OuterVolumeSpecName: "config-data") pod "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" (UID: "aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.497919 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9be96efe-970b-4639-8744-3e63a0abfbd6" (UID: "9be96efe-970b-4639-8744-3e63a0abfbd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.539905 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.539936 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klfr6\" (UniqueName: \"kubernetes.io/projected/9be96efe-970b-4639-8744-3e63a0abfbd6-kube-api-access-klfr6\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.539946 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.558421 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-config-data" (OuterVolumeSpecName: "config-data") pod "9be96efe-970b-4639-8744-3e63a0abfbd6" (UID: "9be96efe-970b-4639-8744-3e63a0abfbd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.581074 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-glkkv" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.582186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-glkkv" event={"ID":"9be96efe-970b-4639-8744-3e63a0abfbd6","Type":"ContainerDied","Data":"a63439eed6240ff963e2ab3c85f961f6598944d29253a424179a3f1f619e48d2"} Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.582248 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63439eed6240ff963e2ab3c85f961f6598944d29253a424179a3f1f619e48d2" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.585938 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-fbkbh" event={"ID":"aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd","Type":"ContainerDied","Data":"e15f6efc887bdc09c26b7231c0958e8a6aaa227d1ace67a45bba0ca27d8b3de0"} Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.585968 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e15f6efc887bdc09c26b7231c0958e8a6aaa227d1ace67a45bba0ca27d8b3de0" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.586022 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-fbkbh" Dec 05 14:16:19 crc kubenswrapper[4858]: I1205 14:16:19.643395 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be96efe-970b-4639-8744-3e63a0abfbd6-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.013921 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.346624 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8688cb6d6-l5l7t" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.448108 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d64554cfb-x842g"] Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.448524 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d64554cfb-x842g" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api-log" containerID="cri-o://fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd" gracePeriod=30 Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.451504 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d64554cfb-x842g" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api" containerID="cri-o://596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b" gracePeriod=30 Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.474719 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d64554cfb-x842g" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.475359 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d64554cfb-x842g" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.682949 4858 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 14:16:20 crc kubenswrapper[4858]: E1205 14:16:20.683557 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" containerName="cinder-db-sync" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.683574 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" containerName="cinder-db-sync" Dec 05 14:16:20 crc kubenswrapper[4858]: E1205 14:16:20.683600 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be96efe-970b-4639-8744-3e63a0abfbd6" containerName="heat-db-sync" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.683608 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be96efe-970b-4639-8744-3e63a0abfbd6" containerName="heat-db-sync" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.683778 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be96efe-970b-4639-8744-3e63a0abfbd6" containerName="heat-db-sync" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.683808 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" containerName="cinder-db-sync" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.684704 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.692605 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kbgwq" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.692871 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.693026 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.693115 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.720110 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.752496 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-8688cb6d6-l5l7t" podUID="4885ff0b-6266-49b4-a554-82b6504f932d" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.163:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.783089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.783237 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.783326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-9nxxm\" (UniqueName: \"kubernetes.io/projected/ec3a579a-85b4-4fd9-814c-4355ed8813b4-kube-api-access-9nxxm\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.783349 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.783377 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.783403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec3a579a-85b4-4fd9-814c-4355ed8813b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.875611 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b99cfc7-qvxwd"] Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.877078 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.885732 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec3a579a-85b4-4fd9-814c-4355ed8813b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.885837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.885906 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.885950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxxm\" (UniqueName: \"kubernetes.io/projected/ec3a579a-85b4-4fd9-814c-4355ed8813b4-kube-api-access-9nxxm\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.885965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.885985 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.896230 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec3a579a-85b4-4fd9-814c-4355ed8813b4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.904113 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.916267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.916363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.916698 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-scripts\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.933812 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b99cfc7-qvxwd"] Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.996840 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndtvc\" (UniqueName: \"kubernetes.io/projected/53dddd76-03ec-457c-b202-4a181872ea4e-kube-api-access-ndtvc\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.996916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-config\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.996951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-sb\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: 
\"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.996979 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-nb\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.997016 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-swift-storage-0\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:20 crc kubenswrapper[4858]: I1205 14:16:20.997051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-svc\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.038381 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxxm\" (UniqueName: \"kubernetes.io/projected/ec3a579a-85b4-4fd9-814c-4355ed8813b4-kube-api-access-9nxxm\") pod \"cinder-scheduler-0\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.050355 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.098502 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-nb\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.098573 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-swift-storage-0\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.098613 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-svc\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.098675 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndtvc\" (UniqueName: \"kubernetes.io/projected/53dddd76-03ec-457c-b202-4a181872ea4e-kube-api-access-ndtvc\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.098720 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-config\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.098749 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-sb\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.099805 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-sb\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.100357 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-nb\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.100862 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-swift-storage-0\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 
14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.101638 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-config\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.102784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-svc\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.143475 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndtvc\" (UniqueName: \"kubernetes.io/projected/53dddd76-03ec-457c-b202-4a181872ea4e-kube-api-access-ndtvc\") pod \"dnsmasq-dns-666b99cfc7-qvxwd\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.415151 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.542526 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.550426 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.558533 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.600408 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.666796 4858 generic.go:334] "Generic (PLEG): container finished" podID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerID="fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd" exitCode=143 Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.666851 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64554cfb-x842g" event={"ID":"f405006f-5489-4c10-916b-c1118b7a3bd7","Type":"ContainerDied","Data":"fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd"} Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.709443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.709505 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-logs\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.709527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.709549 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.709641 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.710107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-scripts\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.710212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf6fn\" (UniqueName: \"kubernetes.io/projected/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-kube-api-access-xf6fn\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.812054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-scripts\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.812165 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf6fn\" (UniqueName: \"kubernetes.io/projected/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-kube-api-access-xf6fn\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.812238 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.812267 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-logs\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.812291 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.812339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.812359 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.813321 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-logs\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.813388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.820591 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.821588 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.829869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data-custom\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.840274 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-scripts\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.848867 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf6fn\" (UniqueName: \"kubernetes.io/projected/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-kube-api-access-xf6fn\") pod \"cinder-api-0\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " pod="openstack/cinder-api-0" Dec 05 14:16:21 crc kubenswrapper[4858]: I1205 14:16:21.890243 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 14:16:22 crc kubenswrapper[4858]: I1205 14:16:22.264447 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c6dddcfdd-5kzc7" Dec 05 14:16:23 crc kubenswrapper[4858]: E1205 14:16:23.385053 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 05 14:16:23 crc kubenswrapper[4858]: E1205 14:16:23.385337 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9g47j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(30bc8a2e-6170-4c4e-9289-ba46ae2768e8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 05 14:16:23 crc kubenswrapper[4858]: E1205 14:16:23.387639 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.577036 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.609704 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.665335 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-horizon-secret-key\") pod \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.665591 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-logs\") pod \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.665635 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-scripts\") pod \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.665760 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbp84\" (UniqueName: \"kubernetes.io/projected/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-kube-api-access-rbp84\") pod \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.665866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-config-data\") pod \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\" (UID: \"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.676256 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-logs" (OuterVolumeSpecName: "logs") pod "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" (UID: "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.732040 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" (UID: "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.754956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-kube-api-access-rbp84" (OuterVolumeSpecName: "kube-api-access-rbp84") pod "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" (UID: "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c"). InnerVolumeSpecName "kube-api-access-rbp84". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.783747 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvnrg\" (UniqueName: \"kubernetes.io/projected/303565f4-49fa-4a41-9884-c801202229cb-kube-api-access-fvnrg\") pod \"303565f4-49fa-4a41-9884-c801202229cb\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.783803 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-config\") pod \"303565f4-49fa-4a41-9884-c801202229cb\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.784009 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-swift-storage-0\") pod \"303565f4-49fa-4a41-9884-c801202229cb\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.784071 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-sb\") pod \"303565f4-49fa-4a41-9884-c801202229cb\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.784095 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-svc\") pod \"303565f4-49fa-4a41-9884-c801202229cb\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.784125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-nb\") pod \"303565f4-49fa-4a41-9884-c801202229cb\" (UID: \"303565f4-49fa-4a41-9884-c801202229cb\") " Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.784502 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.784515 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbp84\" (UniqueName: \"kubernetes.io/projected/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-kube-api-access-rbp84\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.784529 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.824937 4858 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303565f4-49fa-4a41-9884-c801202229cb-kube-api-access-fvnrg" (OuterVolumeSpecName: "kube-api-access-fvnrg") pod "303565f4-49fa-4a41-9884-c801202229cb" (UID: "303565f4-49fa-4a41-9884-c801202229cb"). InnerVolumeSpecName "kube-api-access-fvnrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.827395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-scripts" (OuterVolumeSpecName: "scripts") pod "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" (UID: "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.848065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" event={"ID":"303565f4-49fa-4a41-9884-c801202229cb","Type":"ContainerDied","Data":"ffe253cdca960cffe0bc0694ab8a2da3492b95b282feaa908b8dce982654dfb7"} Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.848141 4858 scope.go:117] "RemoveContainer" containerID="03b65ed9d886af294a30dc7ae883fd1c935d17612c5da3b40714a50a31b4c17d" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.848285 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f889b9ff-thbrt" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.851463 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-config-data" (OuterVolumeSpecName: "config-data") pod "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" (UID: "9a7abe6e-8eda-4e8b-8974-53b4eeefed9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.875294 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerName="ceilometer-notification-agent" containerID="cri-o://b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6" gracePeriod=30 Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.875439 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-575f67464c-nsrld" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.876914 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-575f67464c-nsrld" event={"ID":"9a7abe6e-8eda-4e8b-8974-53b4eeefed9c","Type":"ContainerDied","Data":"f731d6b3456bcd2721be00ef7f6299283449ebbe48afe1bb26d1b8b34e27decb"} Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.877298 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerName="sg-core" containerID="cri-o://c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c" gracePeriod=30 Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.887497 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.887517 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvnrg\" (UniqueName: \"kubernetes.io/projected/303565f4-49fa-4a41-9884-c801202229cb-kube-api-access-fvnrg\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.887527 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:23 crc kubenswrapper[4858]: I1205 14:16:23.922492 4858 scope.go:117] "RemoveContainer" containerID="ac84971d27886f81ae6af45f3b6c96072e8cdea8f5a28312299366d4aac1e083" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.007183 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-575f67464c-nsrld"] Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.052111 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-config" (OuterVolumeSpecName: "config") pod "303565f4-49fa-4a41-9884-c801202229cb" (UID: "303565f4-49fa-4a41-9884-c801202229cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.052194 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-575f67464c-nsrld"] Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.061933 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "303565f4-49fa-4a41-9884-c801202229cb" (UID: "303565f4-49fa-4a41-9884-c801202229cb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.070192 4858 scope.go:117] "RemoveContainer" containerID="2d72059b626402160d5fe6efc207e1fdabb4d9e3e9be37933836671ce03f6f1d" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.072676 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.090690 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.090728 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.124174 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "303565f4-49fa-4a41-9884-c801202229cb" (UID: "303565f4-49fa-4a41-9884-c801202229cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.144040 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "303565f4-49fa-4a41-9884-c801202229cb" (UID: "303565f4-49fa-4a41-9884-c801202229cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.148281 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "303565f4-49fa-4a41-9884-c801202229cb" (UID: "303565f4-49fa-4a41-9884-c801202229cb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.173752 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.192672 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.192712 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.192724 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/303565f4-49fa-4a41-9884-c801202229cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.222448 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f889b9ff-thbrt"] Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.237626 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69f889b9ff-thbrt"] Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.373961 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.430282 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b99cfc7-qvxwd"] Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.441933 4858 scope.go:117] "RemoveContainer" containerID="fff9443d5f06d90d5763eef075c535599704e5911c68b84cf3f3a9a6ddd9ab9d" Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.898251 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a579a-85b4-4fd9-814c-4355ed8813b4","Type":"ContainerStarted","Data":"c033e333beca9bc083217e1a2b5c91f5e018fb83ab5fbbecfeb9dc4eb1d8c751"} Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.909374 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" event={"ID":"53dddd76-03ec-457c-b202-4a181872ea4e","Type":"ContainerStarted","Data":"d3bc17a4006225723184d21774e53bb25f7bd6b75e0bc00e11e7eb980ae31da9"} Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.918383 4858 generic.go:334] "Generic (PLEG): container finished" podID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerID="c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c" exitCode=2 Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.918454 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bc8a2e-6170-4c4e-9289-ba46ae2768e8","Type":"ContainerDied","Data":"c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c"} Dec 05 14:16:24 crc kubenswrapper[4858]: I1205 14:16:24.919757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4946b64a-bc5a-46af-9dbc-418c9b81a4ce","Type":"ContainerStarted","Data":"dc5e27474f619dc051ec0bec6e74d2a9c4298b88ddfd41e89cecc32b1258ec5c"} Dec 05 14:16:25 crc kubenswrapper[4858]: I1205 14:16:25.267997 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-8688cb6d6-l5l7t" podUID="4885ff0b-6266-49b4-a554-82b6504f932d" containerName="barbican-api" probeResult="failure" 
output="Get \"https://10.217.0.163:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:16:25 crc kubenswrapper[4858]: I1205 14:16:25.521062 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d64554cfb-x842g" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:16:25 crc kubenswrapper[4858]: I1205 14:16:25.917175 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303565f4-49fa-4a41-9884-c801202229cb" path="/var/lib/kubelet/pods/303565f4-49fa-4a41-9884-c801202229cb/volumes" Dec 05 14:16:25 crc kubenswrapper[4858]: I1205 14:16:25.929259 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" path="/var/lib/kubelet/pods/9a7abe6e-8eda-4e8b-8974-53b4eeefed9c/volumes" Dec 05 14:16:25 crc kubenswrapper[4858]: I1205 14:16:25.966631 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4946b64a-bc5a-46af-9dbc-418c9b81a4ce","Type":"ContainerStarted","Data":"9c4b8163e8ce0773769c3e4a7e4bce17dc13712eadb4392eb57e981561495ef2"} Dec 05 14:16:25 crc kubenswrapper[4858]: I1205 14:16:25.978707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a579a-85b4-4fd9-814c-4355ed8813b4","Type":"ContainerStarted","Data":"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f"} Dec 05 14:16:25 crc kubenswrapper[4858]: I1205 14:16:25.987583 4858 generic.go:334] "Generic (PLEG): container finished" podID="53dddd76-03ec-457c-b202-4a181872ea4e" containerID="a55d7333332722b518378055f8df3a923450c64c9664feb818658234a1a1ece8" exitCode=0 Dec 05 14:16:25 crc kubenswrapper[4858]: I1205 14:16:25.987625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" event={"ID":"53dddd76-03ec-457c-b202-4a181872ea4e","Type":"ContainerDied","Data":"a55d7333332722b518378055f8df3a923450c64c9664feb818658234a1a1ece8"} Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.037994 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d64554cfb-x842g" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:55562->10.217.0.162:9311: read: connection reset by peer" Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.040621 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d64554cfb-x842g" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:55572->10.217.0.162:9311: read: connection reset by peer" Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.040807 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.720345 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.815470 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.825008 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c6dddcfdd-5kzc7"] Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.825238 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c6dddcfdd-5kzc7" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" containerName="neutron-api" containerID="cri-o://b778d8fc7b39e5648781ead32eb7b0aca9b90862151db9ab77351a6069a6f47a" gracePeriod=30 Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.825374 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6c6dddcfdd-5kzc7" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" containerName="neutron-httpd" containerID="cri-o://871fb8f5ccdeaffdbec27df82e46b0ee2ee341c1a450ef81050b37d03ebdf571" gracePeriod=30 Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.950885 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.980001 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g47j\" (UniqueName: \"kubernetes.io/projected/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-kube-api-access-9g47j\") pod \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.980114 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-config-data\") pod \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.980186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-log-httpd\") pod \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.980248 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-scripts\") pod \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.980281 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-run-httpd\") pod \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.980438 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-sg-core-conf-yaml\") pod \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\" (UID: \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.980511 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-combined-ca-bundle\") pod \"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\" (UID: 
\"30bc8a2e-6170-4c4e-9289-ba46ae2768e8\") " Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.987409 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30bc8a2e-6170-4c4e-9289-ba46ae2768e8" (UID: "30bc8a2e-6170-4c4e-9289-ba46ae2768e8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:16:26 crc kubenswrapper[4858]: I1205 14:16:26.991067 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30bc8a2e-6170-4c4e-9289-ba46ae2768e8" (UID: "30bc8a2e-6170-4c4e-9289-ba46ae2768e8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.023859 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-kube-api-access-9g47j" (OuterVolumeSpecName: "kube-api-access-9g47j") pod "30bc8a2e-6170-4c4e-9289-ba46ae2768e8" (UID: "30bc8a2e-6170-4c4e-9289-ba46ae2768e8"). InnerVolumeSpecName "kube-api-access-9g47j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.030812 4858 generic.go:334] "Generic (PLEG): container finished" podID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerID="596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b" exitCode=0 Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.030905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64554cfb-x842g" event={"ID":"f405006f-5489-4c10-916b-c1118b7a3bd7","Type":"ContainerDied","Data":"596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b"} Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.030956 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d64554cfb-x842g" event={"ID":"f405006f-5489-4c10-916b-c1118b7a3bd7","Type":"ContainerDied","Data":"0ad509bfb55fd04c3d4317366fb075de926b1e869dc21f57f6346956020124df"} Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.030972 4858 scope.go:117] "RemoveContainer" containerID="596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.031188 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d64554cfb-x842g" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.031362 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-scripts" (OuterVolumeSpecName: "scripts") pod "30bc8a2e-6170-4c4e-9289-ba46ae2768e8" (UID: "30bc8a2e-6170-4c4e-9289-ba46ae2768e8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.043420 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" event={"ID":"53dddd76-03ec-457c-b202-4a181872ea4e","Type":"ContainerStarted","Data":"1814f53c081a8a90103d71f6d46b6aa20251d09ab30a4bd3bf4dfed963a9c251"} Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.044376 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.050500 4858 generic.go:334] "Generic (PLEG): container finished" podID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerID="b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6" exitCode=0 Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.050553 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bc8a2e-6170-4c4e-9289-ba46ae2768e8","Type":"ContainerDied","Data":"b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6"} Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.050580 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bc8a2e-6170-4c4e-9289-ba46ae2768e8","Type":"ContainerDied","Data":"626d8daf5fc92f24329df27ba269f8edc744f5d0d6c81d64b279c58b29bb4f38"} Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.050654 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.076810 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" podStartSLOduration=7.076791864 podStartE2EDuration="7.076791864s" podCreationTimestamp="2025-12-05 14:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:27.064423847 +0000 UTC m=+1195.612021986" watchObservedRunningTime="2025-12-05 14:16:27.076791864 +0000 UTC m=+1195.624390004" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.083866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data-custom\") pod \"f405006f-5489-4c10-916b-c1118b7a3bd7\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.083958 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data\") pod \"f405006f-5489-4c10-916b-c1118b7a3bd7\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.084001 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f405006f-5489-4c10-916b-c1118b7a3bd7-logs\") pod \"f405006f-5489-4c10-916b-c1118b7a3bd7\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.084101 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-combined-ca-bundle\") pod \"f405006f-5489-4c10-916b-c1118b7a3bd7\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " Dec 05 14:16:27 crc 
kubenswrapper[4858]: I1205 14:16:27.084135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x94rh\" (UniqueName: \"kubernetes.io/projected/f405006f-5489-4c10-916b-c1118b7a3bd7-kube-api-access-x94rh\") pod \"f405006f-5489-4c10-916b-c1118b7a3bd7\" (UID: \"f405006f-5489-4c10-916b-c1118b7a3bd7\") " Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.084615 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g47j\" (UniqueName: \"kubernetes.io/projected/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-kube-api-access-9g47j\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.084633 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.084642 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.084650 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.092981 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f405006f-5489-4c10-916b-c1118b7a3bd7-kube-api-access-x94rh" (OuterVolumeSpecName: "kube-api-access-x94rh") pod "f405006f-5489-4c10-916b-c1118b7a3bd7" (UID: "f405006f-5489-4c10-916b-c1118b7a3bd7"). InnerVolumeSpecName "kube-api-access-x94rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.100454 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f405006f-5489-4c10-916b-c1118b7a3bd7-logs" (OuterVolumeSpecName: "logs") pod "f405006f-5489-4c10-916b-c1118b7a3bd7" (UID: "f405006f-5489-4c10-916b-c1118b7a3bd7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.119049 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30bc8a2e-6170-4c4e-9289-ba46ae2768e8" (UID: "30bc8a2e-6170-4c4e-9289-ba46ae2768e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.127623 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-config-data" (OuterVolumeSpecName: "config-data") pod "30bc8a2e-6170-4c4e-9289-ba46ae2768e8" (UID: "30bc8a2e-6170-4c4e-9289-ba46ae2768e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.132774 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30bc8a2e-6170-4c4e-9289-ba46ae2768e8" (UID: "30bc8a2e-6170-4c4e-9289-ba46ae2768e8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.132806 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f405006f-5489-4c10-916b-c1118b7a3bd7" (UID: "f405006f-5489-4c10-916b-c1118b7a3bd7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.155992 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f405006f-5489-4c10-916b-c1118b7a3bd7" (UID: "f405006f-5489-4c10-916b-c1118b7a3bd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.179001 4858 scope.go:117] "RemoveContainer" containerID="fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.186040 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.186070 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x94rh\" (UniqueName: \"kubernetes.io/projected/f405006f-5489-4c10-916b-c1118b7a3bd7-kube-api-access-x94rh\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.186081 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.186092 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.186101 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.186111 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bc8a2e-6170-4c4e-9289-ba46ae2768e8-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.186120 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f405006f-5489-4c10-916b-c1118b7a3bd7-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.197087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data" (OuterVolumeSpecName: "config-data") pod "f405006f-5489-4c10-916b-c1118b7a3bd7" (UID: "f405006f-5489-4c10-916b-c1118b7a3bd7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.236698 4858 scope.go:117] "RemoveContainer" containerID="596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b" Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.238961 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b\": container with ID starting with 596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b not found: ID does not exist" containerID="596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.239006 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b"} err="failed to get container status \"596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b\": rpc error: code = NotFound desc = could not find container \"596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b\": container with ID starting with 596d0a56d1a1106ee6318f664ad43007316372c18b1d1fd69e420b89689ada0b not found: ID does not exist" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.239036 4858 scope.go:117] "RemoveContainer" containerID="fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd" Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.239366 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd\": container with ID starting with fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd not found: ID does not exist" containerID="fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.239389 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd"} err="failed to get container status \"fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd\": rpc error: code = NotFound desc = could not find container \"fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd\": container with ID starting with fcc5fc4266ab864ee9f34e3d787d561118ad464fe92887e286a89376dc0578fd not found: ID does not exist" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.239405 4858 scope.go:117] "RemoveContainer" containerID="c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.279880 4858 scope.go:117] "RemoveContainer" containerID="b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.287456 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f405006f-5489-4c10-916b-c1118b7a3bd7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.305186 4858 scope.go:117] "RemoveContainer" containerID="c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c" Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.308287 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c\": container with ID starting with c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c not found: ID does not exist" containerID="c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.308318 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c"} err="failed to get container status \"c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c\": rpc error: code = NotFound desc = could not find container \"c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c\": container with ID starting with c7c36b6a4758c16f5df0e801cbfd2c2659ad6eb85929764ceae2e602f3c6d48c not found: ID does not exist" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.308342 4858 scope.go:117] "RemoveContainer" containerID="b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6" Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.308776 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6\": container with ID starting with b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6 not found: ID does not exist" containerID="b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.308860 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6"} err="failed to get container status \"b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6\": rpc error: code = NotFound desc = could not find container \"b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6\": container with ID starting with b64cbc10e562c94e7a3c5b918777cc325bc019414fa180d21d6b6c9885aa7aa6 not found: ID does not exist" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.382986 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d64554cfb-x842g"] Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.406107 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6d64554cfb-x842g"] Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.507140 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.520885 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546310 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.546697 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerName="horizon" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546711 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerName="horizon" Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.546722 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerName="ceilometer-notification-agent" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546728 4858 
Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.546745 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerName="horizon-log"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546751 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerName="horizon-log"
Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.546763 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303565f4-49fa-4a41-9884-c801202229cb" containerName="dnsmasq-dns"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546769 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="303565f4-49fa-4a41-9884-c801202229cb" containerName="dnsmasq-dns"
Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.546778 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303565f4-49fa-4a41-9884-c801202229cb" containerName="init"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546783 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="303565f4-49fa-4a41-9884-c801202229cb" containerName="init"
Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.546794 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerName="sg-core"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546800 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerName="sg-core"
Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.546810 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api-log"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546815 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api-log"
Dec 05 14:16:27 crc kubenswrapper[4858]: E1205 14:16:27.546839 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.546845 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.547021 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.547035 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerName="sg-core"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.547049 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerName="horizon"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.547058 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" containerName="ceilometer-notification-agent"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.547074 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="303565f4-49fa-4a41-9884-c801202229cb" containerName="dnsmasq-dns"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.547083 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" containerName="barbican-api-log"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.547091 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7abe6e-8eda-4e8b-8974-53b4eeefed9c" containerName="horizon-log"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.548630 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.554065 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.554156 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.569738 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.695878 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-log-httpd\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.695974 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.696005 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.696039 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-scripts\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.696055 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7w5l\" (UniqueName: \"kubernetes.io/projected/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-kube-api-access-b7w5l\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.696175 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-run-httpd\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0"
Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.696316 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-config-data\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0"
\"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.798616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.798702 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-scripts\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.798727 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7w5l\" (UniqueName: \"kubernetes.io/projected/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-kube-api-access-b7w5l\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.798774 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-run-httpd\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.798846 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-config-data\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.798908 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-log-httpd\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.798987 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.800456 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-log-httpd\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.800734 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-run-httpd\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.806201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " 
pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.811254 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-config-data\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.812007 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.812064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-scripts\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.833152 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7w5l\" (UniqueName: \"kubernetes.io/projected/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-kube-api-access-b7w5l\") pod \"ceilometer-0\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") " pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.868086 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.917924 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bc8a2e-6170-4c4e-9289-ba46ae2768e8" path="/var/lib/kubelet/pods/30bc8a2e-6170-4c4e-9289-ba46ae2768e8/volumes" Dec 05 14:16:27 crc kubenswrapper[4858]: I1205 14:16:27.918708 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f405006f-5489-4c10-916b-c1118b7a3bd7" path="/var/lib/kubelet/pods/f405006f-5489-4c10-916b-c1118b7a3bd7/volumes" Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.082213 4858 generic.go:334] "Generic (PLEG): container finished" podID="a08a4143-92f7-4cc4-a600-a5449137a190" containerID="871fb8f5ccdeaffdbec27df82e46b0ee2ee341c1a450ef81050b37d03ebdf571" exitCode=0 Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.082273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6dddcfdd-5kzc7" event={"ID":"a08a4143-92f7-4cc4-a600-a5449137a190","Type":"ContainerDied","Data":"871fb8f5ccdeaffdbec27df82e46b0ee2ee341c1a450ef81050b37d03ebdf571"} Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.085933 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4946b64a-bc5a-46af-9dbc-418c9b81a4ce","Type":"ContainerStarted","Data":"276fc3baa06fa66ffa99b8d58fc8b0503b4e33dca0541f0d7e7c507c6c7c4032"} Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.086068 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerName="cinder-api-log" containerID="cri-o://9c4b8163e8ce0773769c3e4a7e4bce17dc13712eadb4392eb57e981561495ef2" gracePeriod=30 Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.086364 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.086605 4858 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerName="cinder-api" containerID="cri-o://276fc3baa06fa66ffa99b8d58fc8b0503b4e33dca0541f0d7e7c507c6c7c4032" gracePeriod=30 Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.124584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a579a-85b4-4fd9-814c-4355ed8813b4","Type":"ContainerStarted","Data":"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5"} Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.199747 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.19972185 podStartE2EDuration="7.19972185s" podCreationTimestamp="2025-12-05 14:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:28.122558963 +0000 UTC m=+1196.670157102" watchObservedRunningTime="2025-12-05 14:16:28.19972185 +0000 UTC m=+1196.747319989" Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.205147 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.3172188 podStartE2EDuration="8.205127477s" podCreationTimestamp="2025-12-05 14:16:20 +0000 UTC" firstStartedPulling="2025-12-05 14:16:24.119074075 +0000 UTC m=+1192.666672204" lastFinishedPulling="2025-12-05 14:16:25.006982742 +0000 UTC m=+1193.554580881" observedRunningTime="2025-12-05 14:16:28.192518633 +0000 UTC m=+1196.740116772" watchObservedRunningTime="2025-12-05 14:16:28.205127477 +0000 UTC m=+1196.752725616" Dec 05 14:16:28 crc kubenswrapper[4858]: I1205 14:16:28.486377 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.162942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerStarted","Data":"9afd2a9263eaa12e338eaed0d7523b4d4d9e4906319bc150597d4a5689702469"} Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.164025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerStarted","Data":"97db944343d91803e59b65e0b477c6235552f1c960f04d6525bcbc2275267991"} Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.165777 4858 generic.go:334] "Generic (PLEG): container finished" podID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerID="276fc3baa06fa66ffa99b8d58fc8b0503b4e33dca0541f0d7e7c507c6c7c4032" exitCode=0 Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.165801 4858 generic.go:334] "Generic (PLEG): container finished" podID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerID="9c4b8163e8ce0773769c3e4a7e4bce17dc13712eadb4392eb57e981561495ef2" exitCode=143 Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.166037 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4946b64a-bc5a-46af-9dbc-418c9b81a4ce","Type":"ContainerDied","Data":"276fc3baa06fa66ffa99b8d58fc8b0503b4e33dca0541f0d7e7c507c6c7c4032"} Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.166110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"4946b64a-bc5a-46af-9dbc-418c9b81a4ce","Type":"ContainerDied","Data":"9c4b8163e8ce0773769c3e4a7e4bce17dc13712eadb4392eb57e981561495ef2"} Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.246075 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.361777 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-logs\") pod \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.361900 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-combined-ca-bundle\") pod \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.361975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-etc-machine-id\") pod \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.362372 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-logs" (OuterVolumeSpecName: "logs") pod "4946b64a-bc5a-46af-9dbc-418c9b81a4ce" (UID: "4946b64a-bc5a-46af-9dbc-418c9b81a4ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.362607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4946b64a-bc5a-46af-9dbc-418c9b81a4ce" (UID: "4946b64a-bc5a-46af-9dbc-418c9b81a4ce"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.362927 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-scripts\") pod \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.363349 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data\") pod \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.363434 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data-custom\") pod \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.363478 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf6fn\" (UniqueName: \"kubernetes.io/projected/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-kube-api-access-xf6fn\") pod \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\" (UID: \"4946b64a-bc5a-46af-9dbc-418c9b81a4ce\") " Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.363945 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.363967 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.367681 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-scripts" (OuterVolumeSpecName: "scripts") pod "4946b64a-bc5a-46af-9dbc-418c9b81a4ce" (UID: "4946b64a-bc5a-46af-9dbc-418c9b81a4ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.367711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4946b64a-bc5a-46af-9dbc-418c9b81a4ce" (UID: "4946b64a-bc5a-46af-9dbc-418c9b81a4ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.370469 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-kube-api-access-xf6fn" (OuterVolumeSpecName: "kube-api-access-xf6fn") pod "4946b64a-bc5a-46af-9dbc-418c9b81a4ce" (UID: "4946b64a-bc5a-46af-9dbc-418c9b81a4ce"). InnerVolumeSpecName "kube-api-access-xf6fn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.395700 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4946b64a-bc5a-46af-9dbc-418c9b81a4ce" (UID: "4946b64a-bc5a-46af-9dbc-418c9b81a4ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.432091 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data" (OuterVolumeSpecName: "config-data") pod "4946b64a-bc5a-46af-9dbc-418c9b81a4ce" (UID: "4946b64a-bc5a-46af-9dbc-418c9b81a4ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.465686 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.465723 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.465732 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.465740 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:29 crc kubenswrapper[4858]: I1205 14:16:29.465748 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf6fn\" (UniqueName: \"kubernetes.io/projected/4946b64a-bc5a-46af-9dbc-418c9b81a4ce-kube-api-access-xf6fn\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.175391 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerStarted","Data":"9982f8bdcdb41c75025d4eb2256f597ef4c8ba6b068c241390274b068c743a46"} Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.175959 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerStarted","Data":"db61723d5246cfd2cd8bb1ec41a822bbd139257db4d6bb6d8fbfa929f8911725"} Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.177954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4946b64a-bc5a-46af-9dbc-418c9b81a4ce","Type":"ContainerDied","Data":"dc5e27474f619dc051ec0bec6e74d2a9c4298b88ddfd41e89cecc32b1258ec5c"} Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.178065 4858 scope.go:117] "RemoveContainer" containerID="276fc3baa06fa66ffa99b8d58fc8b0503b4e33dca0541f0d7e7c507c6c7c4032" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.178012 4858 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.199259 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.212040 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.240566 4858 scope.go:117] "RemoveContainer" containerID="9c4b8163e8ce0773769c3e4a7e4bce17dc13712eadb4392eb57e981561495ef2"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.246772 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 05 14:16:30 crc kubenswrapper[4858]: E1205 14:16:30.247169 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerName="cinder-api"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.247187 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerName="cinder-api"
Dec 05 14:16:30 crc kubenswrapper[4858]: E1205 14:16:30.247230 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerName="cinder-api-log"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.247237 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerName="cinder-api-log"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.247391 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerName="cinder-api-log"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.247418 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" containerName="cinder-api"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.248325 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.274258 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.274902 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.275037 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.293172 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.382990 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-public-tls-certs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.383088 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/048ced77-bd4f-48c2-90f3-13081773f309-logs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.383139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfqv4\" (UniqueName: \"kubernetes.io/projected/048ced77-bd4f-48c2-90f3-13081773f309-kube-api-access-dfqv4\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.383155 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/048ced77-bd4f-48c2-90f3-13081773f309-etc-machine-id\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.383174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-config-data\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.383190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.383214 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-config-data-custom\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0"
Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.383253 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0"
\"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.383295 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-scripts\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/048ced77-bd4f-48c2-90f3-13081773f309-logs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485122 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfqv4\" (UniqueName: \"kubernetes.io/projected/048ced77-bd4f-48c2-90f3-13081773f309-kube-api-access-dfqv4\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/048ced77-bd4f-48c2-90f3-13081773f309-etc-machine-id\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485157 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-config-data\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485173 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-config-data-custom\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-scripts\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.485288 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-public-tls-certs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.487363 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/048ced77-bd4f-48c2-90f3-13081773f309-etc-machine-id\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.487460 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/048ced77-bd4f-48c2-90f3-13081773f309-logs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.492662 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.494368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-config-data-custom\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.494762 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-public-tls-certs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.502645 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.507028 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfqv4\" (UniqueName: \"kubernetes.io/projected/048ced77-bd4f-48c2-90f3-13081773f309-kube-api-access-dfqv4\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.508063 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-scripts\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.513525 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/048ced77-bd4f-48c2-90f3-13081773f309-config-data\") pod \"cinder-api-0\" (UID: \"048ced77-bd4f-48c2-90f3-13081773f309\") " pod="openstack/cinder-api-0" Dec 05 14:16:30 crc kubenswrapper[4858]: I1205 14:16:30.578481 4858 util.go:30] "No sandbox for pod can be found. 
Dec 05 14:16:31 crc kubenswrapper[4858]: I1205 14:16:31.054250 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 05 14:16:31 crc kubenswrapper[4858]: I1205 14:16:31.233252 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 05 14:16:31 crc kubenswrapper[4858]: I1205 14:16:31.417053 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd"
Dec 05 14:16:31 crc kubenswrapper[4858]: I1205 14:16:31.524302 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697d8bbbf9-dvsmf"]
Dec 05 14:16:31 crc kubenswrapper[4858]: I1205 14:16:31.524786 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" podUID="545af5cd-079a-4dab-a389-163d5560a8f5" containerName="dnsmasq-dns" containerID="cri-o://0a93d7fdb619e704b967838726a55e7605b8aa9fee2d39de83fac2d1e4444f63" gracePeriod=10
Dec 05 14:16:31 crc kubenswrapper[4858]: I1205 14:16:31.700046 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 05 14:16:31 crc kubenswrapper[4858]: I1205 14:16:31.768455 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 05 14:16:31 crc kubenswrapper[4858]: I1205 14:16:31.955690 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4946b64a-bc5a-46af-9dbc-418c9b81a4ce" path="/var/lib/kubelet/pods/4946b64a-bc5a-46af-9dbc-418c9b81a4ce/volumes"
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.019929 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7578ddfc8d-65llf"
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.272183 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"048ced77-bd4f-48c2-90f3-13081773f309","Type":"ContainerStarted","Data":"d41b3220b41f9dfcb38e7723d0232fcb5db7458e58c7b133b66b1014d8c306f0"}
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.295376 4858 generic.go:334] "Generic (PLEG): container finished" podID="545af5cd-079a-4dab-a389-163d5560a8f5" containerID="0a93d7fdb619e704b967838726a55e7605b8aa9fee2d39de83fac2d1e4444f63" exitCode=0
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.295465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" event={"ID":"545af5cd-079a-4dab-a389-163d5560a8f5","Type":"ContainerDied","Data":"0a93d7fdb619e704b967838726a55e7605b8aa9fee2d39de83fac2d1e4444f63"}
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.298182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerStarted","Data":"e18c24c63613c3944b010ae25bc3045b70b75167e6d86333d69e9ec0e391033a"}
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.299324 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.313625 4858 generic.go:334] "Generic (PLEG): container finished" podID="a08a4143-92f7-4cc4-a600-a5449137a190" containerID="b778d8fc7b39e5648781ead32eb7b0aca9b90862151db9ab77351a6069a6f47a" exitCode=0
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.313849 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="cinder-scheduler" containerID="cri-o://8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f" gracePeriod=30
period" pod="openstack/cinder-scheduler-0" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="cinder-scheduler" containerID="cri-o://8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f" gracePeriod=30 Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.314110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6dddcfdd-5kzc7" event={"ID":"a08a4143-92f7-4cc4-a600-a5449137a190","Type":"ContainerDied","Data":"b778d8fc7b39e5648781ead32eb7b0aca9b90862151db9ab77351a6069a6f47a"} Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.314171 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="probe" containerID="cri-o://892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5" gracePeriod=30 Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.343057 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.726473993 podStartE2EDuration="5.343032846s" podCreationTimestamp="2025-12-05 14:16:27 +0000 UTC" firstStartedPulling="2025-12-05 14:16:28.482082911 +0000 UTC m=+1197.029681050" lastFinishedPulling="2025-12-05 14:16:31.098641764 +0000 UTC m=+1199.646239903" observedRunningTime="2025-12-05 14:16:32.337434563 +0000 UTC m=+1200.885032692" watchObservedRunningTime="2025-12-05 14:16:32.343032846 +0000 UTC m=+1200.890630985" Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.406428 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7578ddfc8d-65llf" Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.687232 4858 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.773870 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-config\") pod \"545af5cd-079a-4dab-a389-163d5560a8f5\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") "
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.773994 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-nb\") pod \"545af5cd-079a-4dab-a389-163d5560a8f5\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") "
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.774034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-swift-storage-0\") pod \"545af5cd-079a-4dab-a389-163d5560a8f5\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") "
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.774147 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7qw7\" (UniqueName: \"kubernetes.io/projected/545af5cd-079a-4dab-a389-163d5560a8f5-kube-api-access-h7qw7\") pod \"545af5cd-079a-4dab-a389-163d5560a8f5\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") "
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.774196 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-sb\") pod \"545af5cd-079a-4dab-a389-163d5560a8f5\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") "
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.774285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-svc\") pod \"545af5cd-079a-4dab-a389-163d5560a8f5\" (UID: \"545af5cd-079a-4dab-a389-163d5560a8f5\") "
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.835884 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545af5cd-079a-4dab-a389-163d5560a8f5-kube-api-access-h7qw7" (OuterVolumeSpecName: "kube-api-access-h7qw7") pod "545af5cd-079a-4dab-a389-163d5560a8f5" (UID: "545af5cd-079a-4dab-a389-163d5560a8f5"). InnerVolumeSpecName "kube-api-access-h7qw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.876468 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7qw7\" (UniqueName: \"kubernetes.io/projected/545af5cd-079a-4dab-a389-163d5560a8f5-kube-api-access-h7qw7\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.926625 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "545af5cd-079a-4dab-a389-163d5560a8f5" (UID: "545af5cd-079a-4dab-a389-163d5560a8f5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:32 crc kubenswrapper[4858]: I1205 14:16:32.978636 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.002178 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "545af5cd-079a-4dab-a389-163d5560a8f5" (UID: "545af5cd-079a-4dab-a389-163d5560a8f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.036552 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "545af5cd-079a-4dab-a389-163d5560a8f5" (UID: "545af5cd-079a-4dab-a389-163d5560a8f5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.039145 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "545af5cd-079a-4dab-a389-163d5560a8f5" (UID: "545af5cd-079a-4dab-a389-163d5560a8f5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.039072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-config" (OuterVolumeSpecName: "config") pod "545af5cd-079a-4dab-a389-163d5560a8f5" (UID: "545af5cd-079a-4dab-a389-163d5560a8f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.105194 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.105409 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.105486 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.105547 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/545af5cd-079a-4dab-a389-163d5560a8f5-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.221034 4858 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.311299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrppx\" (UniqueName: \"kubernetes.io/projected/a08a4143-92f7-4cc4-a600-a5449137a190-kube-api-access-qrppx\") pod \"a08a4143-92f7-4cc4-a600-a5449137a190\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") "
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.311473 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-ovndb-tls-certs\") pod \"a08a4143-92f7-4cc4-a600-a5449137a190\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") "
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.311504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-httpd-config\") pod \"a08a4143-92f7-4cc4-a600-a5449137a190\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") "
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.311572 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-combined-ca-bundle\") pod \"a08a4143-92f7-4cc4-a600-a5449137a190\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") "
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.311606 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-config\") pod \"a08a4143-92f7-4cc4-a600-a5449137a190\" (UID: \"a08a4143-92f7-4cc4-a600-a5449137a190\") "
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.315292 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08a4143-92f7-4cc4-a600-a5449137a190-kube-api-access-qrppx" (OuterVolumeSpecName: "kube-api-access-qrppx") pod "a08a4143-92f7-4cc4-a600-a5449137a190" (UID: "a08a4143-92f7-4cc4-a600-a5449137a190"). InnerVolumeSpecName "kube-api-access-qrppx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.321582 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a08a4143-92f7-4cc4-a600-a5449137a190" (UID: "a08a4143-92f7-4cc4-a600-a5449137a190"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.357776 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf"
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.357876 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" event={"ID":"545af5cd-079a-4dab-a389-163d5560a8f5","Type":"ContainerDied","Data":"d35e5204220a6e14eda81ec2825241445119dc5a27176c7db53121c02488fd70"}
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.357957 4858 scope.go:117] "RemoveContainer" containerID="0a93d7fdb619e704b967838726a55e7605b8aa9fee2d39de83fac2d1e4444f63"
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.363717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6dddcfdd-5kzc7" event={"ID":"a08a4143-92f7-4cc4-a600-a5449137a190","Type":"ContainerDied","Data":"e5a1cb8b2894fc256fef2cb14c3069b5a02710427d4dc8e83f85f132c8b1a463"}
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.363809 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c6dddcfdd-5kzc7"
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.404274 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"048ced77-bd4f-48c2-90f3-13081773f309","Type":"ContainerStarted","Data":"b90d130a44de6ea4ac70804ee2d5a948adbe4d8d8934de8ece41fe37d928ea84"}
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.414093 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrppx\" (UniqueName: \"kubernetes.io/projected/a08a4143-92f7-4cc4-a600-a5449137a190-kube-api-access-qrppx\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.414124 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.414637 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a08a4143-92f7-4cc4-a600-a5449137a190" (UID: "a08a4143-92f7-4cc4-a600-a5449137a190"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.418581 4858 scope.go:117] "RemoveContainer" containerID="7a7880b1c9dc419401b73d629461dab77a7cdb75438300e63daa7a00ffe67189"
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.418918 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697d8bbbf9-dvsmf"]
Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.420789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-config" (OuterVolumeSpecName: "config") pod "a08a4143-92f7-4cc4-a600-a5449137a190" (UID: "a08a4143-92f7-4cc4-a600-a5449137a190"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.427027 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-697d8bbbf9-dvsmf"] Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.498513 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a08a4143-92f7-4cc4-a600-a5449137a190" (UID: "a08a4143-92f7-4cc4-a600-a5449137a190"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.501399 4858 scope.go:117] "RemoveContainer" containerID="871fb8f5ccdeaffdbec27df82e46b0ee2ee341c1a450ef81050b37d03ebdf571" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.527805 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.528091 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.528104 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a08a4143-92f7-4cc4-a600-a5449137a190-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.549338 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7dbd4c4c5b-8skvw" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.681495 4858 scope.go:117] "RemoveContainer" containerID="b778d8fc7b39e5648781ead32eb7b0aca9b90862151db9ab77351a6069a6f47a" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.722446 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6c6dddcfdd-5kzc7"] Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.728629 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6c6dddcfdd-5kzc7"] Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.942884 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545af5cd-079a-4dab-a389-163d5560a8f5" path="/var/lib/kubelet/pods/545af5cd-079a-4dab-a389-163d5560a8f5/volumes" Dec 05 14:16:33 crc kubenswrapper[4858]: I1205 14:16:33.943440 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" path="/var/lib/kubelet/pods/a08a4143-92f7-4cc4-a600-a5449137a190/volumes" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.405214 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.414273 4858 generic.go:334] "Generic (PLEG): container finished" podID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerID="892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5" exitCode=0 Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.414301 4858 generic.go:334] "Generic (PLEG): container finished" podID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerID="8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f" exitCode=0 Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.414352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a579a-85b4-4fd9-814c-4355ed8813b4","Type":"ContainerDied","Data":"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5"} Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.414368 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.414387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a579a-85b4-4fd9-814c-4355ed8813b4","Type":"ContainerDied","Data":"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f"} Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.414399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ec3a579a-85b4-4fd9-814c-4355ed8813b4","Type":"ContainerDied","Data":"c033e333beca9bc083217e1a2b5c91f5e018fb83ab5fbbecfeb9dc4eb1d8c751"} Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.414414 4858 scope.go:117] "RemoveContainer" containerID="892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.419954 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"048ced77-bd4f-48c2-90f3-13081773f309","Type":"ContainerStarted","Data":"873b942d92ece07561a779fb617a884e9a36ddb109568ee079752d482460c775"} Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.441445 4858 scope.go:117] "RemoveContainer" containerID="8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.445656 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec3a579a-85b4-4fd9-814c-4355ed8813b4-etc-machine-id\") pod \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.445739 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-combined-ca-bundle\") pod \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.445799 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec3a579a-85b4-4fd9-814c-4355ed8813b4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ec3a579a-85b4-4fd9-814c-4355ed8813b4" (UID: "ec3a579a-85b4-4fd9-814c-4355ed8813b4"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.445870 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-scripts\") pod \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.445901 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data\") pod \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.445926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxxm\" (UniqueName: \"kubernetes.io/projected/ec3a579a-85b4-4fd9-814c-4355ed8813b4-kube-api-access-9nxxm\") pod \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.445953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data-custom\") pod \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\" (UID: \"ec3a579a-85b4-4fd9-814c-4355ed8813b4\") " Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.446378 4858 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec3a579a-85b4-4fd9-814c-4355ed8813b4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.496174 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-scripts" (OuterVolumeSpecName: "scripts") pod "ec3a579a-85b4-4fd9-814c-4355ed8813b4" (UID: "ec3a579a-85b4-4fd9-814c-4355ed8813b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.514116 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3a579a-85b4-4fd9-814c-4355ed8813b4-kube-api-access-9nxxm" (OuterVolumeSpecName: "kube-api-access-9nxxm") pod "ec3a579a-85b4-4fd9-814c-4355ed8813b4" (UID: "ec3a579a-85b4-4fd9-814c-4355ed8813b4"). InnerVolumeSpecName "kube-api-access-9nxxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.514222 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ec3a579a-85b4-4fd9-814c-4355ed8813b4" (UID: "ec3a579a-85b4-4fd9-814c-4355ed8813b4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.514348 4858 scope.go:117] "RemoveContainer" containerID="892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5" Dec 05 14:16:34 crc kubenswrapper[4858]: E1205 14:16:34.521108 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5\": container with ID starting with 892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5 not found: ID does not exist" containerID="892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.521184 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5"} err="failed to get container status \"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5\": rpc error: code = NotFound desc = could not find container \"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5\": container with ID starting with 892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5 not found: ID does not exist" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.521219 4858 scope.go:117] "RemoveContainer" containerID="8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f" Dec 05 14:16:34 crc kubenswrapper[4858]: E1205 14:16:34.522135 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f\": container with ID starting with 8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f not found: ID does not exist" containerID="8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.522170 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f"} err="failed to get container status \"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f\": rpc error: code = NotFound desc = could not find container \"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f\": container with ID starting with 8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f not found: ID does not exist" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.522188 4858 scope.go:117] "RemoveContainer" containerID="892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.522482 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5"} err="failed to get container status \"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5\": rpc error: code = NotFound desc = could not find container \"892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5\": container with ID starting with 892254f75b565c83c37c0dc52e57cfac37e5b66b45c08be0a3d0141c86d28bc5 not found: ID does not exist" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.522499 4858 scope.go:117] "RemoveContainer" containerID="8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.522968 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f"} err="failed to get container status \"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f\": rpc error: code = NotFound desc = could not find container \"8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f\": container with ID starting with 8d03e0cb268f4b6f06becc0297a1cf7006cd9d15a4ee4721954230cc83969b0f not found: ID does not exist" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.550729 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.550760 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxxm\" (UniqueName: \"kubernetes.io/projected/ec3a579a-85b4-4fd9-814c-4355ed8813b4-kube-api-access-9nxxm\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.550769 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.610358 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec3a579a-85b4-4fd9-814c-4355ed8813b4" (UID: "ec3a579a-85b4-4fd9-814c-4355ed8813b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.652842 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.691124 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data" (OuterVolumeSpecName: "config-data") pod "ec3a579a-85b4-4fd9-814c-4355ed8813b4" (UID: "ec3a579a-85b4-4fd9-814c-4355ed8813b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.754963 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3a579a-85b4-4fd9-814c-4355ed8813b4-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.758948 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.772780 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.796559 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 14:16:34 crc kubenswrapper[4858]: E1205 14:16:34.796969 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545af5cd-079a-4dab-a389-163d5560a8f5" containerName="dnsmasq-dns" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797000 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="545af5cd-079a-4dab-a389-163d5560a8f5" containerName="dnsmasq-dns" Dec 05 14:16:34 crc kubenswrapper[4858]: E1205 14:16:34.797016 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545af5cd-079a-4dab-a389-163d5560a8f5" containerName="init" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797022 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="545af5cd-079a-4dab-a389-163d5560a8f5" containerName="init" Dec 05 14:16:34 crc kubenswrapper[4858]: E1205 14:16:34.797029 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="cinder-scheduler" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797035 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="cinder-scheduler" Dec 05 14:16:34 crc kubenswrapper[4858]: E1205 14:16:34.797052 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="probe" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797057 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="probe" Dec 05 14:16:34 crc kubenswrapper[4858]: E1205 14:16:34.797070 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" containerName="neutron-api" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797076 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" containerName="neutron-api" Dec 05 14:16:34 crc kubenswrapper[4858]: E1205 14:16:34.797086 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" containerName="neutron-httpd" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797092 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" containerName="neutron-httpd" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797274 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="probe" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797285 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" containerName="neutron-httpd" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797295 4858 
memory_manager.go:354] "RemoveStaleState removing state" podUID="545af5cd-079a-4dab-a389-163d5560a8f5" containerName="dnsmasq-dns" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797301 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" containerName="cinder-scheduler" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.797313 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08a4143-92f7-4cc4-a600-a5449137a190" containerName="neutron-api" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.798201 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.810673 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.811695 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.856556 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.856647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.856673 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-scripts\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.856713 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.856787 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpqt\" (UniqueName: \"kubernetes.io/projected/eaf87b37-d86c-4788-9768-2b3abf22f309-kube-api-access-shpqt\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.856813 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf87b37-d86c-4788-9768-2b3abf22f309-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.958570 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/eaf87b37-d86c-4788-9768-2b3abf22f309-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.958658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eaf87b37-d86c-4788-9768-2b3abf22f309-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.958801 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.958955 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.958994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-scripts\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.959077 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.959164 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpqt\" (UniqueName: \"kubernetes.io/projected/eaf87b37-d86c-4788-9768-2b3abf22f309-kube-api-access-shpqt\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.961968 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.962879 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.964039 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-config-data\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.965552 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eaf87b37-d86c-4788-9768-2b3abf22f309-scripts\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:34 crc kubenswrapper[4858]: I1205 14:16:34.979173 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpqt\" (UniqueName: \"kubernetes.io/projected/eaf87b37-d86c-4788-9768-2b3abf22f309-kube-api-access-shpqt\") pod \"cinder-scheduler-0\" (UID: \"eaf87b37-d86c-4788-9768-2b3abf22f309\") " pod="openstack/cinder-scheduler-0" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.123760 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.430922 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.467755 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.467737035 podStartE2EDuration="5.467737035s" podCreationTimestamp="2025-12-05 14:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:35.462213654 +0000 UTC m=+1204.009811793" watchObservedRunningTime="2025-12-05 14:16:35.467737035 +0000 UTC m=+1204.015335174" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.640812 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.779888 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.781623 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.790693 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.790967 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-vv9dv" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.795253 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.810164 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.933165 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3a579a-85b4-4fd9-814c-4355ed8813b4" path="/var/lib/kubelet/pods/ec3a579a-85b4-4fd9-814c-4355ed8813b4/volumes" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.990743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.990815 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f55qs\" (UniqueName: \"kubernetes.io/projected/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-kube-api-access-f55qs\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.990894 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:35 crc kubenswrapper[4858]: I1205 14:16:35.991045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.093176 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.093277 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.093309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f55qs\" (UniqueName: 
\"kubernetes.io/projected/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-kube-api-access-f55qs\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.093347 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.096478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.097366 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.105300 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config-secret\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.118415 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f55qs\" (UniqueName: \"kubernetes.io/projected/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-kube-api-access-f55qs\") pod \"openstackclient\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.125202 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.171296 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.182173 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.212864 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.214164 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient"
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.220458 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 05 14:16:36 crc kubenswrapper[4858]: E1205 14:16:36.331012 4858 log.go:32] "RunPodSandbox from runtime service failed" err=<
Dec 05 14:16:36 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced_0(08939753638eb6944084f4b9b3c673cb8afa487fb38d2f6c1c5a4872fd856572): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"08939753638eb6944084f4b9b3c673cb8afa487fb38d2f6c1c5a4872fd856572" Netns:"/var/run/netns/57b09145-bfa2-4a07-9a08-93abc2d33057" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=08939753638eb6944084f4b9b3c673cb8afa487fb38d2f6c1c5a4872fd856572;K8S_POD_UID=a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced]: expected pod UID "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" but got "5460d83c-8be9-4dad-b13d-aa6ea71b31cd" from Kube API
Dec 05 14:16:36 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:16:36 crc kubenswrapper[4858]: >
Dec 05 14:16:36 crc kubenswrapper[4858]: E1205 14:16:36.331083 4858 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Dec 05 14:16:36 crc kubenswrapper[4858]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced_0(08939753638eb6944084f4b9b3c673cb8afa487fb38d2f6c1c5a4872fd856572): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"08939753638eb6944084f4b9b3c673cb8afa487fb38d2f6c1c5a4872fd856572" Netns:"/var/run/netns/57b09145-bfa2-4a07-9a08-93abc2d33057" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=08939753638eb6944084f4b9b3c673cb8afa487fb38d2f6c1c5a4872fd856572;K8S_POD_UID=a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced]: expected pod UID "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" but got "5460d83c-8be9-4dad-b13d-aa6ea71b31cd" from Kube API
Dec 05 14:16:36 crc kubenswrapper[4858]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 05 14:16:36 crc kubenswrapper[4858]: > pod="openstack/openstackclient"
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.401037 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient"
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.401358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-openstack-config\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient"
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.401411 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56tq\" (UniqueName: \"kubernetes.io/projected/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-kube-api-access-z56tq\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient"
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.401659 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-openstack-config-secret\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient"
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.436514 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf87b37-d86c-4788-9768-2b3abf22f309","Type":"ContainerStarted","Data":"fb2239e0be46b62f6a37e1af06ad1be7c6a7d227ecd915d3130ee2eda111e0c1"}
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.436538 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.440017 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" podUID="5460d83c-8be9-4dad-b13d-aa6ea71b31cd"
Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.444910 4858 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.503113 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-openstack-config-secret\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.503190 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.503241 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-openstack-config\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.503285 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z56tq\" (UniqueName: \"kubernetes.io/projected/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-kube-api-access-z56tq\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.505653 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-openstack-config\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.511431 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.525864 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-openstack-config-secret\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.546579 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z56tq\" (UniqueName: \"kubernetes.io/projected/5460d83c-8be9-4dad-b13d-aa6ea71b31cd-kube-api-access-z56tq\") pod \"openstackclient\" (UID: \"5460d83c-8be9-4dad-b13d-aa6ea71b31cd\") " pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.553314 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.604723 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-combined-ca-bundle\") pod \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.604789 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config\") pod \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.604851 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f55qs\" (UniqueName: \"kubernetes.io/projected/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-kube-api-access-f55qs\") pod \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.604981 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config-secret\") pod \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\" (UID: \"a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced\") " Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.608074 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" (UID: "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.612192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" (UID: "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.612320 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" (UID: "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.618145 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-kube-api-access-f55qs" (OuterVolumeSpecName: "kube-api-access-f55qs") pod "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" (UID: "a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced"). InnerVolumeSpecName "kube-api-access-f55qs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.710368 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.710632 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.710644 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:36 crc kubenswrapper[4858]: I1205 14:16:36.710652 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f55qs\" (UniqueName: \"kubernetes.io/projected/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced-kube-api-access-f55qs\") on node \"crc\" DevicePath \"\"" Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.102304 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.126129 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-697d8bbbf9-dvsmf" podUID="545af5cd-079a-4dab-a389-163d5560a8f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.161:5353: i/o timeout" Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.449960 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5460d83c-8be9-4dad-b13d-aa6ea71b31cd","Type":"ContainerStarted","Data":"1b5f1cd5c299c332c371410ac6f417cd614e7be84266a81518e6e65744734581"} Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.455880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf87b37-d86c-4788-9768-2b3abf22f309","Type":"ContainerStarted","Data":"c2a3d0b18af55c64536a1263594144d468172d0f5a91c17c2f4b8ff4a3f7b2bf"} Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.455929 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf87b37-d86c-4788-9768-2b3abf22f309","Type":"ContainerStarted","Data":"c70f033307a7acc48406cec5a46e0d47d1b962b438e8904299dd296e5ca8b9fd"} Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.455895 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.475207 4858 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" podUID="5460d83c-8be9-4dad-b13d-aa6ea71b31cd" Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.480920 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.480901481 podStartE2EDuration="3.480901481s" podCreationTimestamp="2025-12-05 14:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:37.472159343 +0000 UTC m=+1206.019757482" watchObservedRunningTime="2025-12-05 14:16:37.480901481 +0000 UTC m=+1206.028499620" Dec 05 14:16:37 crc kubenswrapper[4858]: I1205 14:16:37.909117 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced" path="/var/lib/kubelet/pods/a9eab2ff-7a1d-4fd9-9e73-8aeb9948eced/volumes" Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.125249 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.529319 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-fdcff888c-psnlc"] Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.531713 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-fdcff888c-psnlc" Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.534493 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.534964 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.538610 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.569743 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-fdcff888c-psnlc"] Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.687562 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-public-tls-certs\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc" Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.687902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-log-httpd\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc" Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.687972 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5rt4\" (UniqueName: \"kubernetes.io/projected/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-kube-api-access-j5rt4\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc" Dec 
05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.687991 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-combined-ca-bundle\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.688045 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-run-httpd\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.688080 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-config-data\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.688094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-internal-tls-certs\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.688114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-etc-swift\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.789850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-public-tls-certs\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.789935 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-log-httpd\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.789990 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5rt4\" (UniqueName: \"kubernetes.io/projected/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-kube-api-access-j5rt4\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.790019 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-combined-ca-bundle\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.790087 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-run-httpd\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.790137 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-internal-tls-certs\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.790162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-config-data\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.790191 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-etc-swift\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.791325 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-run-httpd\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.791424 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-log-httpd\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.801536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-public-tls-certs\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.806307 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-combined-ca-bundle\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.807930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-internal-tls-certs\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.814547 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-config-data\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.814571 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-etc-swift\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.828980 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5rt4\" (UniqueName: \"kubernetes.io/projected/3ab446a1-c4b7-40c6-879b-f0f90f4b8559-kube-api-access-j5rt4\") pod \"swift-proxy-fdcff888c-psnlc\" (UID: \"3ab446a1-c4b7-40c6-879b-f0f90f4b8559\") " pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:40 crc kubenswrapper[4858]: I1205 14:16:40.849889 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:41 crc kubenswrapper[4858]: I1205 14:16:41.701006 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-fdcff888c-psnlc"]
Dec 05 14:16:42 crc kubenswrapper[4858]: I1205 14:16:42.529607 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fdcff888c-psnlc" event={"ID":"3ab446a1-c4b7-40c6-879b-f0f90f4b8559","Type":"ContainerStarted","Data":"0d4de3e7a81a8686e28fe2390a8797d9356d7f34a880dc51face13e2509603be"}
Dec 05 14:16:42 crc kubenswrapper[4858]: I1205 14:16:42.530499 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fdcff888c-psnlc" event={"ID":"3ab446a1-c4b7-40c6-879b-f0f90f4b8559","Type":"ContainerStarted","Data":"3a81e4e264fa946cc8476963c83d8b2d1daf6ff73f1ef998bcd61ab15e609043"}
Dec 05 14:16:43 crc kubenswrapper[4858]: I1205 14:16:43.541052 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fdcff888c-psnlc" event={"ID":"3ab446a1-c4b7-40c6-879b-f0f90f4b8559","Type":"ContainerStarted","Data":"2d5fc9a7c796bb93daf73e0cf77607ede2e52f7ef83224db21cdc015064e4065"}
Dec 05 14:16:43 crc kubenswrapper[4858]: I1205 14:16:43.541370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:43 crc kubenswrapper[4858]: I1205 14:16:43.541395 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:43 crc kubenswrapper[4858]: I1205 14:16:43.567717 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-fdcff888c-psnlc" podStartSLOduration=3.56769842 podStartE2EDuration="3.56769842s" podCreationTimestamp="2025-12-05 14:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:43.560262577 +0000 UTC m=+1212.107860716" watchObservedRunningTime="2025-12-05 14:16:43.56769842 +0000 UTC m=+1212.115296559"
Dec 05 14:16:43 crc kubenswrapper[4858]: I1205 14:16:43.803356 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 05 14:16:44 crc kubenswrapper[4858]: I1205 14:16:44.759708 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 14:16:44 crc kubenswrapper[4858]: I1205 14:16:44.760058 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 14:16:45 crc kubenswrapper[4858]: I1205 14:16:45.434021 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 05 14:16:47 crc kubenswrapper[4858]: I1205 14:16:47.386771 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:16:47 crc kubenswrapper[4858]: I1205 14:16:47.387333 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="ceilometer-central-agent" containerID="cri-o://9afd2a9263eaa12e338eaed0d7523b4d4d9e4906319bc150597d4a5689702469" gracePeriod=30
Dec 05 14:16:47 crc kubenswrapper[4858]: I1205 14:16:47.387689 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="proxy-httpd" containerID="cri-o://e18c24c63613c3944b010ae25bc3045b70b75167e6d86333d69e9ec0e391033a" gracePeriod=30
Dec 05 14:16:47 crc kubenswrapper[4858]: I1205 14:16:47.387711 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="ceilometer-notification-agent" containerID="cri-o://db61723d5246cfd2cd8bb1ec41a822bbd139257db4d6bb6d8fbfa929f8911725" gracePeriod=30
Dec 05 14:16:47 crc kubenswrapper[4858]: I1205 14:16:47.387797 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="sg-core" containerID="cri-o://9982f8bdcdb41c75025d4eb2256f597ef4c8ba6b068c241390274b068c743a46" gracePeriod=30
Dec 05 14:16:47 crc kubenswrapper[4858]: I1205 14:16:47.408070 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.167:3000/\": EOF"
Dec 05 14:16:48 crc kubenswrapper[4858]: I1205 14:16:48.606334 4858 generic.go:334] "Generic (PLEG): container finished" podID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerID="9982f8bdcdb41c75025d4eb2256f597ef4c8ba6b068c241390274b068c743a46" exitCode=2
Dec 05 14:16:48 crc kubenswrapper[4858]: I1205 14:16:48.606361 4858 generic.go:334] "Generic (PLEG): container finished" podID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerID="9afd2a9263eaa12e338eaed0d7523b4d4d9e4906319bc150597d4a5689702469" exitCode=0
Dec 05 14:16:48 crc kubenswrapper[4858]: I1205 14:16:48.606381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerDied","Data":"9982f8bdcdb41c75025d4eb2256f597ef4c8ba6b068c241390274b068c743a46"}
Dec 05 14:16:48 crc kubenswrapper[4858]: I1205 14:16:48.606404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerDied","Data":"9afd2a9263eaa12e338eaed0d7523b4d4d9e4906319bc150597d4a5689702469"}
Dec 05 14:16:48 crc kubenswrapper[4858]: E1205 14:16:48.919482 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d379509_8e2b_4f37_b08b_f8dc06c98ee8.slice/crio-e18c24c63613c3944b010ae25bc3045b70b75167e6d86333d69e9ec0e391033a.scope\": RecentStats: unable to find data in memory cache]"
Dec 05 14:16:49 crc kubenswrapper[4858]: I1205 14:16:49.625638 4858 generic.go:334] "Generic (PLEG): container finished" podID="f9929d39-1191-4732-a51f-16d2f973bf90" containerID="22d0a7a46fc3ae2b7828a4d3f6f59aa262bf8ed16cb09331868098f002150ec0" exitCode=137
Dec 05 14:16:49 crc kubenswrapper[4858]: I1205 14:16:49.625718 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fb787db8-jqwt8" event={"ID":"f9929d39-1191-4732-a51f-16d2f973bf90","Type":"ContainerDied","Data":"22d0a7a46fc3ae2b7828a4d3f6f59aa262bf8ed16cb09331868098f002150ec0"}
Dec 05 14:16:49 crc kubenswrapper[4858]: I1205 14:16:49.628256 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerID="61f7dd3bef7baaad01301f499bc946fc6b7f67a00416e4a5dc1f0bf9d190b0df" exitCode=137
Dec 05 14:16:49 crc kubenswrapper[4858]: I1205 14:16:49.628295 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fd8d549b-n87dk" event={"ID":"f4e91f9c-4d1e-4765-b609-32b5531066bf","Type":"ContainerDied","Data":"61f7dd3bef7baaad01301f499bc946fc6b7f67a00416e4a5dc1f0bf9d190b0df"}
Dec 05 14:16:49 crc kubenswrapper[4858]: I1205 14:16:49.630636 4858 generic.go:334] "Generic (PLEG): container finished" podID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerID="e18c24c63613c3944b010ae25bc3045b70b75167e6d86333d69e9ec0e391033a" exitCode=0
Dec 05 14:16:49 crc kubenswrapper[4858]: I1205 14:16:49.630658 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerDied","Data":"e18c24c63613c3944b010ae25bc3045b70b75167e6d86333d69e9ec0e391033a"}
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.081023 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7c5f557b4c-fdhxg"]
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.082954 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.087022 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.087219 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-kkl74"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.107037 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.121426 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7c5f557b4c-fdhxg"]
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.180383 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7974d785f8-5hhw6"]
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.183554 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.190816 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.206010 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.206114 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-combined-ca-bundle\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.206167 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data-custom\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.206202 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8hg\" (UniqueName: \"kubernetes.io/projected/b958f7a4-1b99-4ce8-badb-52855609ec9d-kube-api-access-vr8hg\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.223950 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7974d785f8-5hhw6"]
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.267701 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c795fd55-4cmqs"]
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.269406 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.304180 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c795fd55-4cmqs"]
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.307360 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data-custom\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.307529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.307617 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-combined-ca-bundle\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.307689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmpqh\" (UniqueName: \"kubernetes.io/projected/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-kube-api-access-gmpqh\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.307894 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-combined-ca-bundle\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.308730 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data-custom\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.308855 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8hg\" (UniqueName: \"kubernetes.io/projected/b958f7a4-1b99-4ce8-badb-52855609ec9d-kube-api-access-vr8hg\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.309083 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.327645 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.333072 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data-custom\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.334226 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-combined-ca-bundle\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.338716 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8hg\" (UniqueName: \"kubernetes.io/projected/b958f7a4-1b99-4ce8-badb-52855609ec9d-kube-api-access-vr8hg\") pod \"heat-engine-7c5f557b4c-fdhxg\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.411704 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d6856d7d8-ln5hc"]
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-svc\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416178 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-swift-storage-0\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416201 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-sb\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416217 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwhg\" (UniqueName: \"kubernetes.io/projected/0051a952-b753-48c8-af95-52ca1cd543b8-kube-api-access-jtwhg\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416266 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-nb\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416329 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data-custom\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416353 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-config\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416410 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-combined-ca-bundle\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416429 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmpqh\" (UniqueName: \"kubernetes.io/projected/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-kube-api-access-gmpqh\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.416807 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.420237 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.421510 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.423720 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d6856d7d8-ln5hc"]
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.425453 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data-custom\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.426983 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.431076 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-combined-ca-bundle\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.457571 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmpqh\" (UniqueName: \"kubernetes.io/projected/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-kube-api-access-gmpqh\") pod \"heat-cfnapi-7974d785f8-5hhw6\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.530782 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7974d785f8-5hhw6"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532171 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-config\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-combined-ca-bundle\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532275 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-svc\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532305 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data-custom\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srnv2\" (UniqueName: \"kubernetes.io/projected/a0b76ef1-2ed0-4844-bb75-adafdc72e742-kube-api-access-srnv2\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532385 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-swift-storage-0\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532414 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-sb\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532436 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtwhg\" (UniqueName: \"kubernetes.io/projected/0051a952-b753-48c8-af95-52ca1cd543b8-kube-api-access-jtwhg\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532484 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.532534 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-nb\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.536064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-swift-storage-0\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.536809 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-sb\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.538987 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-config\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.541031 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-nb\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.541170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-svc\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.576086 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtwhg\" (UniqueName: \"kubernetes.io/projected/0051a952-b753-48c8-af95-52ca1cd543b8-kube-api-access-jtwhg\") pod \"dnsmasq-dns-6c795fd55-4cmqs\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.592966 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="048ced77-bd4f-48c2-90f3-13081773f309" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.607012 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.634852 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data-custom\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.634922 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srnv2\" (UniqueName: \"kubernetes.io/projected/a0b76ef1-2ed0-4844-bb75-adafdc72e742-kube-api-access-srnv2\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.634972 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.635054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-combined-ca-bundle\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.641249 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-combined-ca-bundle\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.641572 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.643525 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data-custom\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.660139 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srnv2\" (UniqueName: \"kubernetes.io/projected/a0b76ef1-2ed0-4844-bb75-adafdc72e742-kube-api-access-srnv2\") pod \"heat-api-6d6856d7d8-ln5hc\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.825079 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d6856d7d8-ln5hc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.862363 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:50 crc kubenswrapper[4858]: I1205 14:16:50.864169 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-fdcff888c-psnlc"
Dec 05 14:16:51 crc kubenswrapper[4858]: I1205 14:16:51.785177 4858 generic.go:334] "Generic (PLEG): container finished" podID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerID="db61723d5246cfd2cd8bb1ec41a822bbd139257db4d6bb6d8fbfa929f8911725" exitCode=0
Dec 05 14:16:51 crc kubenswrapper[4858]: I1205 14:16:51.785815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerDied","Data":"db61723d5246cfd2cd8bb1ec41a822bbd139257db4d6bb6d8fbfa929f8911725"}
Dec 05 14:16:51 crc kubenswrapper[4858]: W1205 14:16:51.977315 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0051a952_b753_48c8_af95_52ca1cd543b8.slice/crio-68aa07234f1b36f7595a7d5c562e0428ed95b7fe99f5355d9d93964763fb6600 WatchSource:0}: Error finding container 68aa07234f1b36f7595a7d5c562e0428ed95b7fe99f5355d9d93964763fb6600: Status 404 returned error can't find the container with id 68aa07234f1b36f7595a7d5c562e0428ed95b7fe99f5355d9d93964763fb6600
Dec 05 14:16:51 crc kubenswrapper[4858]: I1205 14:16:51.980993 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c795fd55-4cmqs"]
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.015424 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.016910 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7974d785f8-5hhw6"]
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.225040 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d6856d7d8-ln5hc"]
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.259116 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.373092 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-log-httpd\") pod \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") "
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.373216 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7w5l\" (UniqueName: \"kubernetes.io/projected/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-kube-api-access-b7w5l\") pod \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") "
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.373376 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-combined-ca-bundle\") pod \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") "
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.373517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-config-data\") pod \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") "
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.373610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-scripts\") pod \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") "
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.373710 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-run-httpd\") pod \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") "
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.373784 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-sg-core-conf-yaml\") pod \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\" (UID: \"0d379509-8e2b-4f37-b08b-f8dc06c98ee8\") "
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.378329 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0d379509-8e2b-4f37-b08b-f8dc06c98ee8" (UID: "0d379509-8e2b-4f37-b08b-f8dc06c98ee8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.383675 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-kube-api-access-b7w5l" (OuterVolumeSpecName: "kube-api-access-b7w5l") pod "0d379509-8e2b-4f37-b08b-f8dc06c98ee8" (UID: "0d379509-8e2b-4f37-b08b-f8dc06c98ee8"). InnerVolumeSpecName "kube-api-access-b7w5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.388034 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0d379509-8e2b-4f37-b08b-f8dc06c98ee8" (UID: "0d379509-8e2b-4f37-b08b-f8dc06c98ee8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.400622 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-scripts" (OuterVolumeSpecName: "scripts") pod "0d379509-8e2b-4f37-b08b-f8dc06c98ee8" (UID: "0d379509-8e2b-4f37-b08b-f8dc06c98ee8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:16:52 crc kubenswrapper[4858]: W1205 14:16:52.414867 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb958f7a4_1b99_4ce8_badb_52855609ec9d.slice/crio-58e362b1744068c52c916a61dcecf2031f276f25be9981c0908c42cbc8bff860 WatchSource:0}: Error finding container 58e362b1744068c52c916a61dcecf2031f276f25be9981c0908c42cbc8bff860: Status 404 returned error can't find the container with id 58e362b1744068c52c916a61dcecf2031f276f25be9981c0908c42cbc8bff860
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.416798 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0d379509-8e2b-4f37-b08b-f8dc06c98ee8" (UID: "0d379509-8e2b-4f37-b08b-f8dc06c98ee8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.421998 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7c5f557b4c-fdhxg"]
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.482968 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-scripts\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.483110 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.483169 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.483226 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.483279 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7w5l\" (UniqueName: \"kubernetes.io/projected/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-kube-api-access-b7w5l\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.494667 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d379509-8e2b-4f37-b08b-f8dc06c98ee8" (UID: "0d379509-8e2b-4f37-b08b-f8dc06c98ee8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.542988 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-config-data" (OuterVolumeSpecName: "config-data") pod "0d379509-8e2b-4f37-b08b-f8dc06c98ee8" (UID: "0d379509-8e2b-4f37-b08b-f8dc06c98ee8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.584615 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.584980 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d379509-8e2b-4f37-b08b-f8dc06c98ee8-config-data\") on node \"crc\" DevicePath \"\""
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.809277 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d379509-8e2b-4f37-b08b-f8dc06c98ee8","Type":"ContainerDied","Data":"97db944343d91803e59b65e0b477c6235552f1c960f04d6525bcbc2275267991"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.809324 4858 scope.go:117] "RemoveContainer" containerID="e18c24c63613c3944b010ae25bc3045b70b75167e6d86333d69e9ec0e391033a"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.809327 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.816293 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d6856d7d8-ln5hc" event={"ID":"a0b76ef1-2ed0-4844-bb75-adafdc72e742","Type":"ContainerStarted","Data":"2d77da13dc760ebd2242b1b723d44ffe7424eb3423f85cdc962b3b82da3d1f82"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.826929 4858 generic.go:334] "Generic (PLEG): container finished" podID="0051a952-b753-48c8-af95-52ca1cd543b8" containerID="536f7255db5c1df4f6243f7c48543bd8780cf0a52e2fb4deec18ec21919eae07" exitCode=0
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.827065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" event={"ID":"0051a952-b753-48c8-af95-52ca1cd543b8","Type":"ContainerDied","Data":"536f7255db5c1df4f6243f7c48543bd8780cf0a52e2fb4deec18ec21919eae07"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.827096 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" event={"ID":"0051a952-b753-48c8-af95-52ca1cd543b8","Type":"ContainerStarted","Data":"68aa07234f1b36f7595a7d5c562e0428ed95b7fe99f5355d9d93964763fb6600"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.837405 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" event={"ID":"0f24dddb-47d0-42be-9ca3-c3b61bd1580a","Type":"ContainerStarted","Data":"40223cb06ac0df265f0c6972aee684bc08ba5f30268352148b881c937707fbc1"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.852221 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fb787db8-jqwt8" event={"ID":"f9929d39-1191-4732-a51f-16d2f973bf90","Type":"ContainerStarted","Data":"3cebc7af57ab0f74154aefa4efe91457edca0cfec174c176ced281edbf3c5180"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.871258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fd8d549b-n87dk" event={"ID":"f4e91f9c-4d1e-4765-b609-32b5531066bf","Type":"ContainerStarted","Data":"2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.872299 4858 scope.go:117] "RemoveContainer" containerID="9982f8bdcdb41c75025d4eb2256f597ef4c8ba6b068c241390274b068c743a46"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.875285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7c5f557b4c-fdhxg" event={"ID":"b958f7a4-1b99-4ce8-badb-52855609ec9d","Type":"ContainerStarted","Data":"58e362b1744068c52c916a61dcecf2031f276f25be9981c0908c42cbc8bff860"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.881365 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5460d83c-8be9-4dad-b13d-aa6ea71b31cd","Type":"ContainerStarted","Data":"3173011bffd7d8bbe58ef01b2c9f3cb61aa6af7ea4c26f7dea3695279a10b246"}
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.905480 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.917075 4858 scope.go:117] "RemoveContainer" containerID="db61723d5246cfd2cd8bb1ec41a822bbd139257db4d6bb6d8fbfa929f8911725"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.920472 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.933325 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:16:52 crc kubenswrapper[4858]: E1205 14:16:52.934086 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="ceilometer-notification-agent"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.934112 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="ceilometer-notification-agent"
Dec 05 14:16:52 crc kubenswrapper[4858]: E1205 14:16:52.934127 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="sg-core"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.934137 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="sg-core"
Dec 05 14:16:52 crc kubenswrapper[4858]: E1205 14:16:52.934157 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="proxy-httpd"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.934165 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="proxy-httpd"
Dec 05 14:16:52 crc kubenswrapper[4858]: E1205 14:16:52.934198 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="ceilometer-central-agent"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.934208 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="ceilometer-central-agent"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.934450 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="sg-core"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.934472 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="ceilometer-central-agent"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.934488 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="ceilometer-notification-agent"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.934517 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" containerName="proxy-httpd"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.938935 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.942346 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.943302 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.991667 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.993147 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.995238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.995344 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-config-data\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.995480 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-log-httpd\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.995570 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-scripts\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.995915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lndw\" (UniqueName: \"kubernetes.io/projected/81d84be8-b4b4-4e29-a94c-fcca489809fb-kube-api-access-7lndw\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:52 crc kubenswrapper[4858]: I1205 14:16:52.996093 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-run-httpd\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.005470 4858 scope.go:117] "RemoveContainer" containerID="9afd2a9263eaa12e338eaed0d7523b4d4d9e4906319bc150597d4a5689702469"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.032133 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.9343881830000003 podStartE2EDuration="17.032113495s" podCreationTimestamp="2025-12-05 14:16:36 +0000 UTC" firstStartedPulling="2025-12-05 14:16:37.141969536 +0000 UTC m=+1205.689567675" lastFinishedPulling="2025-12-05 14:16:51.239694848 +0000 UTC m=+1219.787292987" observedRunningTime="2025-12-05 14:16:52.981986986 +0000 UTC m=+1221.529585145" watchObservedRunningTime="2025-12-05 14:16:53.032113495 +0000 UTC m=+1221.579711634"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.098261 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lndw\" (UniqueName: \"kubernetes.io/projected/81d84be8-b4b4-4e29-a94c-fcca489809fb-kube-api-access-7lndw\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.098332 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-run-httpd\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.098370 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.098389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.098420 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-config-data\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.098458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-log-httpd\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.098489 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-scripts\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.099646 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-log-httpd\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.099973 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-run-httpd\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.105156 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-scripts\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.107296 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-config-data\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.108177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.115189 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.117012 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lndw\" (UniqueName: \"kubernetes.io/projected/81d84be8-b4b4-4e29-a94c-fcca489809fb-kube-api-access-7lndw\") pod \"ceilometer-0\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.264136 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.932370 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d379509-8e2b-4f37-b08b-f8dc06c98ee8" path="/var/lib/kubelet/pods/0d379509-8e2b-4f37-b08b-f8dc06c98ee8/volumes"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.940412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" event={"ID":"0051a952-b753-48c8-af95-52ca1cd543b8","Type":"ContainerStarted","Data":"79d1dce0c31da28553f959cc24caa6e3ee6e664679454989e80cd72f3b43fa6d"}
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.940459 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.955762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7c5f557b4c-fdhxg" event={"ID":"b958f7a4-1b99-4ce8-badb-52855609ec9d","Type":"ContainerStarted","Data":"928f601cb8d9654ee458318ba6765c0a9723b7b822ef2f70f97324bf36d8928f"}
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.963634 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" podStartSLOduration=3.963617253 podStartE2EDuration="3.963617253s" podCreationTimestamp="2025-12-05 14:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:53.960609321 +0000 UTC m=+1222.508207460" watchObservedRunningTime="2025-12-05 14:16:53.963617253 +0000 UTC m=+1222.511215392"
Dec 05 14:16:53 crc kubenswrapper[4858]: I1205 14:16:53.993390 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7c5f557b4c-fdhxg" podStartSLOduration=3.993369025 podStartE2EDuration="3.993369025s" podCreationTimestamp="2025-12-05 14:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:16:53.985433099 +0000 UTC m=+1222.533031238" watchObservedRunningTime="2025-12-05 14:16:53.993369025 +0000 UTC m=+1222.540967164"
Dec 05 14:16:55 crc kubenswrapper[4858]: I1205 14:16:55.000094 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7c5f557b4c-fdhxg"
Dec 05 14:16:55 crc kubenswrapper[4858]: W1205 14:16:55.329923 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81d84be8_b4b4_4e29_a94c_fcca489809fb.slice/crio-cee21aec4f7d30122a10b2d56145059c9ed50792e49a7c987a9cb3f6a3c06ba4 WatchSource:0}: Error finding container cee21aec4f7d30122a10b2d56145059c9ed50792e49a7c987a9cb3f6a3c06ba4: Status 404 returned error can't find the container with id cee21aec4f7d30122a10b2d56145059c9ed50792e49a7c987a9cb3f6a3c06ba4
Dec 05 14:16:55 crc kubenswrapper[4858]: I1205 14:16:55.335483 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.023945 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.027880 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d6856d7d8-ln5hc"
event={"ID":"a0b76ef1-2ed0-4844-bb75-adafdc72e742","Type":"ContainerStarted","Data":"d36b6edcf130177e6b1ba93276b0d588277a4fe9d7d2c482ecd20ecf3f54bb18"} Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.028905 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d6856d7d8-ln5hc" Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.033671 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" event={"ID":"0f24dddb-47d0-42be-9ca3-c3b61bd1580a","Type":"ContainerStarted","Data":"f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0"} Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.034660 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.036063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerStarted","Data":"195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169"} Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.036172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerStarted","Data":"cee21aec4f7d30122a10b2d56145059c9ed50792e49a7c987a9cb3f6a3c06ba4"} Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.058190 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d6856d7d8-ln5hc" podStartSLOduration=3.499132858 podStartE2EDuration="6.058172791s" podCreationTimestamp="2025-12-05 14:16:50 +0000 UTC" firstStartedPulling="2025-12-05 14:16:52.227621256 +0000 UTC m=+1220.775219395" lastFinishedPulling="2025-12-05 14:16:54.786661189 +0000 UTC m=+1223.334259328" observedRunningTime="2025-12-05 14:16:56.053767331 +0000 UTC m=+1224.601365470" watchObservedRunningTime="2025-12-05 14:16:56.058172791 +0000 UTC m=+1224.605770930" Dec 05 14:16:56 crc kubenswrapper[4858]: I1205 14:16:56.081124 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" podStartSLOduration=3.311543796 podStartE2EDuration="6.081106748s" podCreationTimestamp="2025-12-05 14:16:50 +0000 UTC" firstStartedPulling="2025-12-05 14:16:52.015154174 +0000 UTC m=+1220.562752313" lastFinishedPulling="2025-12-05 14:16:54.784717126 +0000 UTC m=+1223.332315265" observedRunningTime="2025-12-05 14:16:56.078118616 +0000 UTC m=+1224.625716755" watchObservedRunningTime="2025-12-05 14:16:56.081106748 +0000 UTC m=+1224.628704887" Dec 05 14:16:57 crc kubenswrapper[4858]: I1205 14:16:57.046724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerStarted","Data":"32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103"} Dec 05 14:16:57 crc kubenswrapper[4858]: I1205 14:16:57.047021 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerStarted","Data":"85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435"} Dec 05 14:16:58 crc kubenswrapper[4858]: I1205 14:16:58.653757 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:16:58 crc kubenswrapper[4858]: I1205 14:16:58.654967 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:16:58 crc kubenswrapper[4858]: I1205 14:16:58.959725 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:16:58 crc kubenswrapper[4858]: I1205 14:16:58.959778 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.537521 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-69596c746f-2zqwj"] Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.539372 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.547183 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5648884998-l7brp"] Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.548424 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.560705 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69596c746f-2zqwj"] Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.578171 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5648884998-l7brp"] Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.658076 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bkl\" (UniqueName: \"kubernetes.io/projected/7c788136-a2e0-462f-bda7-a49478e425c7-kube-api-access-j7bkl\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.658119 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-combined-ca-bundle\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.658159 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data-custom\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.658209 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data-custom\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.658234 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.658271 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.658290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-combined-ca-bundle\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.658314 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpllv\" (UniqueName: \"kubernetes.io/projected/4a64adf2-e7c1-41a6-9e42-ec919354ad16-kube-api-access-vpllv\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.760320 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7bkl\" (UniqueName: \"kubernetes.io/projected/7c788136-a2e0-462f-bda7-a49478e425c7-kube-api-access-j7bkl\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.760380 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-combined-ca-bundle\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.760397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data-custom\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.760505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data-custom\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.760553 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.760584 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.761289 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-combined-ca-bundle\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.761316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpllv\" (UniqueName: \"kubernetes.io/projected/4a64adf2-e7c1-41a6-9e42-ec919354ad16-kube-api-access-vpllv\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.768657 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-combined-ca-bundle\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.769033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data-custom\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.770491 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.774527 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data-custom\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.778707 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.782843 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-combined-ca-bundle\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.790196 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpllv\" (UniqueName: \"kubernetes.io/projected/4a64adf2-e7c1-41a6-9e42-ec919354ad16-kube-api-access-vpllv\") pod \"heat-api-69596c746f-2zqwj\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.794746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7bkl\" 
(UniqueName: \"kubernetes.io/projected/7c788136-a2e0-462f-bda7-a49478e425c7-kube-api-access-j7bkl\") pod \"heat-cfnapi-5648884998-l7brp\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.863852 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:16:59 crc kubenswrapper[4858]: I1205 14:16:59.870901 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.093871 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-64fbdd66f9-gghv6"] Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.095595 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.116878 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-64fbdd66f9-gghv6"] Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.174675 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-config-data\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.174776 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-combined-ca-bundle\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.174835 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-config-data-custom\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.174872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfccr\" (UniqueName: \"kubernetes.io/projected/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-kube-api-access-zfccr\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.278024 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-config-data\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.279476 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-combined-ca-bundle\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc 
kubenswrapper[4858]: I1205 14:17:00.279621 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-config-data-custom\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.279761 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfccr\" (UniqueName: \"kubernetes.io/projected/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-kube-api-access-zfccr\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.285075 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-combined-ca-bundle\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.285680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-config-data-custom\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.288965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-config-data\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.302472 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfccr\" (UniqueName: \"kubernetes.io/projected/5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f-kube-api-access-zfccr\") pod \"heat-engine-64fbdd66f9-gghv6\" (UID: \"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f\") " pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.568566 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.612033 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.704184 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b99cfc7-qvxwd"] Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.704429 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" podUID="53dddd76-03ec-457c-b202-4a181872ea4e" containerName="dnsmasq-dns" containerID="cri-o://1814f53c081a8a90103d71f6d46b6aa20251d09ab30a4bd3bf4dfed963a9c251" gracePeriod=10 Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.901050 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5648884998-l7brp"] Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.922791 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.923087 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerName="glance-log" containerID="cri-o://616ce3032bc1ee0e159ff6f2cae66db2cfe77778cf3071963e54c7fd762839f9" gracePeriod=30 Dec 05 14:17:00 crc kubenswrapper[4858]: I1205 14:17:00.923591 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerName="glance-httpd" containerID="cri-o://1259a84d33db1d12af20110a104f0023afa884db9b90923a2a0b1f0fa334f35a" gracePeriod=30 Dec 05 14:17:00 crc kubenswrapper[4858]: W1205 14:17:00.978260 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c788136_a2e0_462f_bda7_a49478e425c7.slice/crio-d3d857b849c893b983a005ee088f7679b823a8ee09ca3b5c05dd5e71d868acb0 WatchSource:0}: Error finding container d3d857b849c893b983a005ee088f7679b823a8ee09ca3b5c05dd5e71d868acb0: Status 404 returned error can't find the container with id d3d857b849c893b983a005ee088f7679b823a8ee09ca3b5c05dd5e71d868acb0 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.195717 4858 generic.go:334] "Generic (PLEG): container finished" podID="53dddd76-03ec-457c-b202-4a181872ea4e" containerID="1814f53c081a8a90103d71f6d46b6aa20251d09ab30a4bd3bf4dfed963a9c251" exitCode=0 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.196088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" event={"ID":"53dddd76-03ec-457c-b202-4a181872ea4e","Type":"ContainerDied","Data":"1814f53c081a8a90103d71f6d46b6aa20251d09ab30a4bd3bf4dfed963a9c251"} Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.200122 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerStarted","Data":"dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834"} Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.200284 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="ceilometer-central-agent" 
containerID="cri-o://195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169" gracePeriod=30 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.200546 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.200786 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="proxy-httpd" containerID="cri-o://dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834" gracePeriod=30 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.200869 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="sg-core" containerID="cri-o://32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103" gracePeriod=30 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.200913 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="ceilometer-notification-agent" containerID="cri-o://85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435" gracePeriod=30 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.220080 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5648884998-l7brp" event={"ID":"7c788136-a2e0-462f-bda7-a49478e425c7","Type":"ContainerStarted","Data":"d3d857b849c893b983a005ee088f7679b823a8ee09ca3b5c05dd5e71d868acb0"} Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.230368 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.960041609 podStartE2EDuration="9.230348714s" podCreationTimestamp="2025-12-05 14:16:52 +0000 UTC" firstStartedPulling="2025-12-05 14:16:55.332174406 +0000 UTC m=+1223.879772545" lastFinishedPulling="2025-12-05 14:16:58.602481511 +0000 UTC m=+1227.150079650" observedRunningTime="2025-12-05 14:17:01.227597198 +0000 UTC m=+1229.775195337" watchObservedRunningTime="2025-12-05 14:17:01.230348714 +0000 UTC m=+1229.777946853" Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.231749 4858 generic.go:334] "Generic (PLEG): container finished" podID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerID="616ce3032bc1ee0e159ff6f2cae66db2cfe77778cf3071963e54c7fd762839f9" exitCode=143 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.231785 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d51a537e-24d5-4083-8c7a-8e7abd0abd49","Type":"ContainerDied","Data":"616ce3032bc1ee0e159ff6f2cae66db2cfe77778cf3071963e54c7fd762839f9"} Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.442466 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69596c746f-2zqwj"] Dec 05 14:17:01 crc kubenswrapper[4858]: W1205 14:17:01.462734 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a64adf2_e7c1_41a6_9e42_ec919354ad16.slice/crio-1303ee0a740b2a3e48cb39c41ed67c16b6030f6ca982df5809a2a90f74183fb1 WatchSource:0}: Error finding container 1303ee0a740b2a3e48cb39c41ed67c16b6030f6ca982df5809a2a90f74183fb1: Status 404 returned error can't find the container with id 1303ee0a740b2a3e48cb39c41ed67c16b6030f6ca982df5809a2a90f74183fb1 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 
14:17:01.614601 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-64fbdd66f9-gghv6"] Dec 05 14:17:01 crc kubenswrapper[4858]: W1205 14:17:01.679978 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a1716a7_0892_4e4c_8ef3_0b28ebd0a05f.slice/crio-9416bcf6dcb56e2e336c73442f926b908f63ee64653227686a2f528777b0ac50 WatchSource:0}: Error finding container 9416bcf6dcb56e2e336c73442f926b908f63ee64653227686a2f528777b0ac50: Status 404 returned error can't find the container with id 9416bcf6dcb56e2e336c73442f926b908f63ee64653227686a2f528777b0ac50 Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.746958 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.828550 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-nb\") pod \"53dddd76-03ec-457c-b202-4a181872ea4e\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.828592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-svc\") pod \"53dddd76-03ec-457c-b202-4a181872ea4e\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.828673 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndtvc\" (UniqueName: \"kubernetes.io/projected/53dddd76-03ec-457c-b202-4a181872ea4e-kube-api-access-ndtvc\") pod \"53dddd76-03ec-457c-b202-4a181872ea4e\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.828710 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-config\") pod \"53dddd76-03ec-457c-b202-4a181872ea4e\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.828738 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-swift-storage-0\") pod \"53dddd76-03ec-457c-b202-4a181872ea4e\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.828807 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-sb\") pod \"53dddd76-03ec-457c-b202-4a181872ea4e\" (UID: \"53dddd76-03ec-457c-b202-4a181872ea4e\") " Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.877060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53dddd76-03ec-457c-b202-4a181872ea4e-kube-api-access-ndtvc" (OuterVolumeSpecName: "kube-api-access-ndtvc") pod "53dddd76-03ec-457c-b202-4a181872ea4e" (UID: "53dddd76-03ec-457c-b202-4a181872ea4e"). InnerVolumeSpecName "kube-api-access-ndtvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:01 crc kubenswrapper[4858]: I1205 14:17:01.944668 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndtvc\" (UniqueName: \"kubernetes.io/projected/53dddd76-03ec-457c-b202-4a181872ea4e-kube-api-access-ndtvc\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.264162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "53dddd76-03ec-457c-b202-4a181872ea4e" (UID: "53dddd76-03ec-457c-b202-4a181872ea4e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.293752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5648884998-l7brp" event={"ID":"7c788136-a2e0-462f-bda7-a49478e425c7","Type":"ContainerStarted","Data":"b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19"} Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.294549 4858 scope.go:117] "RemoveContainer" containerID="b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.327020 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-config" (OuterVolumeSpecName: "config") pod "53dddd76-03ec-457c-b202-4a181872ea4e" (UID: "53dddd76-03ec-457c-b202-4a181872ea4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.331189 4858 generic.go:334] "Generic (PLEG): container finished" podID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerID="32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103" exitCode=2 Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.331218 4858 generic.go:334] "Generic (PLEG): container finished" podID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerID="85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435" exitCode=0 Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.331280 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerDied","Data":"32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103"} Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.331306 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerDied","Data":"85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435"} Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.354630 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69596c746f-2zqwj" event={"ID":"4a64adf2-e7c1-41a6-9e42-ec919354ad16","Type":"ContainerStarted","Data":"1303ee0a740b2a3e48cb39c41ed67c16b6030f6ca982df5809a2a90f74183fb1"} Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.363064 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.363089 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.399698 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" event={"ID":"53dddd76-03ec-457c-b202-4a181872ea4e","Type":"ContainerDied","Data":"d3bc17a4006225723184d21774e53bb25f7bd6b75e0bc00e11e7eb980ae31da9"} Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.399762 4858 scope.go:117] "RemoveContainer" containerID="1814f53c081a8a90103d71f6d46b6aa20251d09ab30a4bd3bf4dfed963a9c251" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.400436 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.411854 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64fbdd66f9-gghv6" event={"ID":"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f","Type":"ContainerStarted","Data":"9416bcf6dcb56e2e336c73442f926b908f63ee64653227686a2f528777b0ac50"} Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.440587 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "53dddd76-03ec-457c-b202-4a181872ea4e" (UID: "53dddd76-03ec-457c-b202-4a181872ea4e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.467607 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.485858 4858 scope.go:117] "RemoveContainer" containerID="a55d7333332722b518378055f8df3a923450c64c9664feb818658234a1a1ece8" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.583865 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "53dddd76-03ec-457c-b202-4a181872ea4e" (UID: "53dddd76-03ec-457c-b202-4a181872ea4e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.590183 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "53dddd76-03ec-457c-b202-4a181872ea4e" (UID: "53dddd76-03ec-457c-b202-4a181872ea4e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.675179 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.675227 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/53dddd76-03ec-457c-b202-4a181872ea4e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.766223 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b99cfc7-qvxwd"] Dec 05 14:17:02 crc kubenswrapper[4858]: I1205 14:17:02.783129 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b99cfc7-qvxwd"] Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.421134 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64fbdd66f9-gghv6" event={"ID":"5a1716a7-0892-4e4c-8ef3-0b28ebd0a05f","Type":"ContainerStarted","Data":"ac2b24a14c327bf5f7d1fb0bef7affb32bd103bf2cf28a79a31119e5ba693da0"} Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.422710 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.424537 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c788136-a2e0-462f-bda7-a49478e425c7" containerID="b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19" exitCode=1 Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.424560 4858 generic.go:334] "Generic (PLEG): container finished" podID="7c788136-a2e0-462f-bda7-a49478e425c7" containerID="36b6bd66da07e63852aa7c604e412eb40ad1114849b374fb88b1452dfd7ec797" exitCode=1 Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.424926 4858 scope.go:117] "RemoveContainer" containerID="36b6bd66da07e63852aa7c604e412eb40ad1114849b374fb88b1452dfd7ec797" Dec 05 14:17:03 crc kubenswrapper[4858]: E1205 14:17:03.425138 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5648884998-l7brp_openstack(7c788136-a2e0-462f-bda7-a49478e425c7)\"" pod="openstack/heat-cfnapi-5648884998-l7brp" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.425322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5648884998-l7brp" event={"ID":"7c788136-a2e0-462f-bda7-a49478e425c7","Type":"ContainerDied","Data":"b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19"} Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.425350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5648884998-l7brp" event={"ID":"7c788136-a2e0-462f-bda7-a49478e425c7","Type":"ContainerDied","Data":"36b6bd66da07e63852aa7c604e412eb40ad1114849b374fb88b1452dfd7ec797"} Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.425367 4858 scope.go:117] "RemoveContainer" containerID="b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19" Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.433703 4858 generic.go:334] "Generic (PLEG): container finished" podID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" 
containerID="f36b4c3a5bfd113b428f2f4e351acd7a595ca857cd09659edcf884d2b4eac6ee" exitCode=1 Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.433770 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69596c746f-2zqwj" event={"ID":"4a64adf2-e7c1-41a6-9e42-ec919354ad16","Type":"ContainerDied","Data":"f36b4c3a5bfd113b428f2f4e351acd7a595ca857cd09659edcf884d2b4eac6ee"} Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.434392 4858 scope.go:117] "RemoveContainer" containerID="f36b4c3a5bfd113b428f2f4e351acd7a595ca857cd09659edcf884d2b4eac6ee" Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.452389 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-64fbdd66f9-gghv6" podStartSLOduration=3.452373843 podStartE2EDuration="3.452373843s" podCreationTimestamp="2025-12-05 14:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:17:03.449574256 +0000 UTC m=+1231.997172395" watchObservedRunningTime="2025-12-05 14:17:03.452373843 +0000 UTC m=+1231.999971982" Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.796016 4858 scope.go:117] "RemoveContainer" containerID="b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19" Dec 05 14:17:03 crc kubenswrapper[4858]: E1205 14:17:03.796566 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19\": container with ID starting with b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19 not found: ID does not exist" containerID="b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19" Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.796604 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19"} err="failed to get container status \"b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19\": rpc error: code = NotFound desc = could not find container \"b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19\": container with ID starting with b6a127f4fd3ec2c28af50d6f1d76627d920f4595d2fa40520c4672b2374d2b19 not found: ID does not exist" Dec 05 14:17:03 crc kubenswrapper[4858]: I1205 14:17:03.929089 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53dddd76-03ec-457c-b202-4a181872ea4e" path="/var/lib/kubelet/pods/53dddd76-03ec-457c-b202-4a181872ea4e/volumes" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.157666 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d6856d7d8-ln5hc"] Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.157904 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6d6856d7d8-ln5hc" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerName="heat-api" containerID="cri-o://d36b6edcf130177e6b1ba93276b0d588277a4fe9d7d2c482ecd20ecf3f54bb18" gracePeriod=60 Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.162171 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6d6856d7d8-ln5hc" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.176:8004/healthcheck\": EOF" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.162218 4858 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openstack/heat-api-6d6856d7d8-ln5hc" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.176:8004/healthcheck\": EOF" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.193915 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7974d785f8-5hhw6"] Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.194163 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerName="heat-cfnapi" containerID="cri-o://f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0" gracePeriod=60 Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.208767 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-588f4db944-bv2ww"] Dec 05 14:17:04 crc kubenswrapper[4858]: E1205 14:17:04.209187 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53dddd76-03ec-457c-b202-4a181872ea4e" containerName="dnsmasq-dns" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.209205 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="53dddd76-03ec-457c-b202-4a181872ea4e" containerName="dnsmasq-dns" Dec 05 14:17:04 crc kubenswrapper[4858]: E1205 14:17:04.209219 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53dddd76-03ec-457c-b202-4a181872ea4e" containerName="init" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.209226 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="53dddd76-03ec-457c-b202-4a181872ea4e" containerName="init" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.209404 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="53dddd76-03ec-457c-b202-4a181872ea4e" containerName="dnsmasq-dns" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.210123 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.222433 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.222602 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.223331 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.174:8000/healthcheck\": EOF" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.224016 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.174:8000/healthcheck\": EOF" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.230405 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-588f4db944-bv2ww"] Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.302870 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-695d8f5ccf-btdn9"] Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.304237 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.313579 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.313815 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.326668 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-695d8f5ccf-btdn9"] Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363496 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-config-data-custom\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-combined-ca-bundle\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363578 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-public-tls-certs\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363604 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-internal-tls-certs\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363646 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-config-data-custom\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blbv\" (UniqueName: \"kubernetes.io/projected/dff61e5e-248c-4f44-91e1-ba2cad7c063f-kube-api-access-7blbv\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363730 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-internal-tls-certs\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363785 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-config-data\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363815 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-combined-ca-bundle\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363939 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-public-tls-certs\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.363989 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-config-data\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.364012 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7hbz\" (UniqueName: \"kubernetes.io/projected/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-kube-api-access-x7hbz\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.450606 4858 scope.go:117] "RemoveContainer" containerID="36b6bd66da07e63852aa7c604e412eb40ad1114849b374fb88b1452dfd7ec797" Dec 05 14:17:04 crc kubenswrapper[4858]: E1205 14:17:04.450936 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5648884998-l7brp_openstack(7c788136-a2e0-462f-bda7-a49478e425c7)\"" pod="openstack/heat-cfnapi-5648884998-l7brp" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.452010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69596c746f-2zqwj" event={"ID":"4a64adf2-e7c1-41a6-9e42-ec919354ad16","Type":"ContainerStarted","Data":"d5f71e1617e41ffeeab372e991d16b5caa9ac883a3591c6796df2b17aa992587"} Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.452174 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.469940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7blbv\" (UniqueName: \"kubernetes.io/projected/dff61e5e-248c-4f44-91e1-ba2cad7c063f-kube-api-access-7blbv\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.469989 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-internal-tls-certs\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470014 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-config-data\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470037 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-combined-ca-bundle\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470087 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-public-tls-certs\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470136 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-config-data\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470162 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7hbz\" (UniqueName: \"kubernetes.io/projected/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-kube-api-access-x7hbz\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470263 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-config-data-custom\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-combined-ca-bundle\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470397 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-public-tls-certs\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-internal-tls-certs\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.470581 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-config-data-custom\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.478769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-combined-ca-bundle\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.479886 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-config-data-custom\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.481746 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-internal-tls-certs\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.487711 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-public-tls-certs\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.490298 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-config-data\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.491983 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-public-tls-certs\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.499627 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-internal-tls-certs\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.501212 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff61e5e-248c-4f44-91e1-ba2cad7c063f-config-data-custom\") pod \"heat-api-588f4db944-bv2ww\" (UID: 
\"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.503437 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-combined-ca-bundle\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.509501 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7blbv\" (UniqueName: \"kubernetes.io/projected/dff61e5e-248c-4f44-91e1-ba2cad7c063f-kube-api-access-7blbv\") pod \"heat-api-588f4db944-bv2ww\" (UID: \"dff61e5e-248c-4f44-91e1-ba2cad7c063f\") " pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.510760 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-config-data\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.513887 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7hbz\" (UniqueName: \"kubernetes.io/projected/d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f-kube-api-access-x7hbz\") pod \"heat-cfnapi-695d8f5ccf-btdn9\" (UID: \"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f\") " pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.534323 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.628232 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.864950 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.872120 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:17:04 crc kubenswrapper[4858]: I1205 14:17:04.872175 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.346367 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-69596c746f-2zqwj" podStartSLOduration=6.346346911 podStartE2EDuration="6.346346911s" podCreationTimestamp="2025-12-05 14:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:17:04.497086592 +0000 UTC m=+1233.044684731" watchObservedRunningTime="2025-12-05 14:17:05.346346911 +0000 UTC m=+1233.893945050" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.347710 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-695d8f5ccf-btdn9"] Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.432764 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-588f4db944-bv2ww"] Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.468214 4858 generic.go:334] "Generic (PLEG): container finished" podID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerID="1259a84d33db1d12af20110a104f0023afa884db9b90923a2a0b1f0fa334f35a" exitCode=0 Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.468269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d51a537e-24d5-4083-8c7a-8e7abd0abd49","Type":"ContainerDied","Data":"1259a84d33db1d12af20110a104f0023afa884db9b90923a2a0b1f0fa334f35a"} Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.474982 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" event={"ID":"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f","Type":"ContainerStarted","Data":"4a7a0b53f23370ee61b06aeaa65ca86947e47c4b30e4845322c26bd7e82ac511"} Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.484985 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-588f4db944-bv2ww" event={"ID":"dff61e5e-248c-4f44-91e1-ba2cad7c063f","Type":"ContainerStarted","Data":"73107f830ab24a5210cf4bcd0e17c4f2982e9bda2ada3ede11f0c777b04388b7"} Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.538399 4858 generic.go:334] "Generic (PLEG): container finished" podID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" containerID="d5f71e1617e41ffeeab372e991d16b5caa9ac883a3591c6796df2b17aa992587" exitCode=1 Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.540684 4858 scope.go:117] "RemoveContainer" containerID="d5f71e1617e41ffeeab372e991d16b5caa9ac883a3591c6796df2b17aa992587" Dec 05 14:17:05 crc kubenswrapper[4858]: E1205 14:17:05.541012 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69596c746f-2zqwj_openstack(4a64adf2-e7c1-41a6-9e42-ec919354ad16)\"" pod="openstack/heat-api-69596c746f-2zqwj" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" Dec 05 
14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.541375 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69596c746f-2zqwj" event={"ID":"4a64adf2-e7c1-41a6-9e42-ec919354ad16","Type":"ContainerDied","Data":"d5f71e1617e41ffeeab372e991d16b5caa9ac883a3591c6796df2b17aa992587"} Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.541422 4858 scope.go:117] "RemoveContainer" containerID="f36b4c3a5bfd113b428f2f4e351acd7a595ca857cd09659edcf884d2b4eac6ee" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.541982 4858 scope.go:117] "RemoveContainer" containerID="36b6bd66da07e63852aa7c604e412eb40ad1114849b374fb88b1452dfd7ec797" Dec 05 14:17:05 crc kubenswrapper[4858]: E1205 14:17:05.542244 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5648884998-l7brp_openstack(7c788136-a2e0-462f-bda7-a49478e425c7)\"" pod="openstack/heat-cfnapi-5648884998-l7brp" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.825251 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.908148 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-scripts\") pod \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.909215 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-public-tls-certs\") pod \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.909341 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-config-data\") pod \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.909410 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-httpd-run\") pod \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.909520 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.909664 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-combined-ca-bundle\") pod \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.909772 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg82r\" (UniqueName: 
\"kubernetes.io/projected/d51a537e-24d5-4083-8c7a-8e7abd0abd49-kube-api-access-hg82r\") pod \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.909876 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-logs\") pod \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\" (UID: \"d51a537e-24d5-4083-8c7a-8e7abd0abd49\") " Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.913394 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-logs" (OuterVolumeSpecName: "logs") pod "d51a537e-24d5-4083-8c7a-8e7abd0abd49" (UID: "d51a537e-24d5-4083-8c7a-8e7abd0abd49"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.913772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d51a537e-24d5-4083-8c7a-8e7abd0abd49" (UID: "d51a537e-24d5-4083-8c7a-8e7abd0abd49"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.918317 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d51a537e-24d5-4083-8c7a-8e7abd0abd49-kube-api-access-hg82r" (OuterVolumeSpecName: "kube-api-access-hg82r") pod "d51a537e-24d5-4083-8c7a-8e7abd0abd49" (UID: "d51a537e-24d5-4083-8c7a-8e7abd0abd49"). InnerVolumeSpecName "kube-api-access-hg82r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.932795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-scripts" (OuterVolumeSpecName: "scripts") pod "d51a537e-24d5-4083-8c7a-8e7abd0abd49" (UID: "d51a537e-24d5-4083-8c7a-8e7abd0abd49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:05 crc kubenswrapper[4858]: I1205 14:17:05.935130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "d51a537e-24d5-4083-8c7a-8e7abd0abd49" (UID: "d51a537e-24d5-4083-8c7a-8e7abd0abd49"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.013339 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.013396 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.013424 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.013449 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg82r\" (UniqueName: \"kubernetes.io/projected/d51a537e-24d5-4083-8c7a-8e7abd0abd49-kube-api-access-hg82r\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.013463 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d51a537e-24d5-4083-8c7a-8e7abd0abd49-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.097803 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.115278 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.141214 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-config-data" (OuterVolumeSpecName: "config-data") pod "d51a537e-24d5-4083-8c7a-8e7abd0abd49" (UID: "d51a537e-24d5-4083-8c7a-8e7abd0abd49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.168988 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d51a537e-24d5-4083-8c7a-8e7abd0abd49" (UID: "d51a537e-24d5-4083-8c7a-8e7abd0abd49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.190420 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d51a537e-24d5-4083-8c7a-8e7abd0abd49" (UID: "d51a537e-24d5-4083-8c7a-8e7abd0abd49"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.217041 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.217154 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.217223 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a537e-24d5-4083-8c7a-8e7abd0abd49-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.417071 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b99cfc7-qvxwd" podUID="53dddd76-03ec-457c-b202-4a181872ea4e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.549685 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" event={"ID":"d6f1ab0d-fbf7-46a2-8bfe-d8ee0046761f","Type":"ContainerStarted","Data":"9bc32f3daaf7c222c93e2426970f869b6a83b221d37ef3593c21bb629da6a9bc"} Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.551176 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.552607 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-588f4db944-bv2ww" event={"ID":"dff61e5e-248c-4f44-91e1-ba2cad7c063f","Type":"ContainerStarted","Data":"a2d45f81f4b5c33beb7c05b50de57b8682faef48173dd0d209c65884bfc8a38f"} Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.553474 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.559255 4858 scope.go:117] "RemoveContainer" containerID="d5f71e1617e41ffeeab372e991d16b5caa9ac883a3591c6796df2b17aa992587" Dec 05 14:17:06 crc kubenswrapper[4858]: E1205 14:17:06.559619 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69596c746f-2zqwj_openstack(4a64adf2-e7c1-41a6-9e42-ec919354ad16)\"" pod="openstack/heat-api-69596c746f-2zqwj" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.568320 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d51a537e-24d5-4083-8c7a-8e7abd0abd49","Type":"ContainerDied","Data":"d894c9491fa27ab9ac494a470e659516bd4ebc0b8dc7b0925f0b4ee9822c3456"} Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.568528 4858 scope.go:117] "RemoveContainer" containerID="1259a84d33db1d12af20110a104f0023afa884db9b90923a2a0b1f0fa334f35a" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.568680 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.623557 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" podStartSLOduration=2.62353881 podStartE2EDuration="2.62353881s" podCreationTimestamp="2025-12-05 14:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:17:06.583638303 +0000 UTC m=+1235.131236442" watchObservedRunningTime="2025-12-05 14:17:06.62353881 +0000 UTC m=+1235.171136949" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.637441 4858 scope.go:117] "RemoveContainer" containerID="616ce3032bc1ee0e159ff6f2cae66db2cfe77778cf3071963e54c7fd762839f9" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.643332 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-588f4db944-bv2ww" podStartSLOduration=2.643320395 podStartE2EDuration="2.643320395s" podCreationTimestamp="2025-12-05 14:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:17:06.640725234 +0000 UTC m=+1235.188323393" watchObservedRunningTime="2025-12-05 14:17:06.643320395 +0000 UTC m=+1235.190918534" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.684862 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.691065 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.726479 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:17:06 crc kubenswrapper[4858]: E1205 14:17:06.727065 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerName="glance-httpd" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.727131 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerName="glance-httpd" Dec 05 14:17:06 crc kubenswrapper[4858]: E1205 14:17:06.727201 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerName="glance-log" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.727280 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerName="glance-log" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.727733 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerName="glance-httpd" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.727811 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" containerName="glance-log" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.728879 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.732489 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.732718 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.747912 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.831264 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.831339 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-config-data\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.831401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebad303f-6b9b-4ae1-b012-0862a6280179-logs\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.831508 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwx5x\" (UniqueName: \"kubernetes.io/projected/ebad303f-6b9b-4ae1-b012-0862a6280179-kube-api-access-xwx5x\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.831527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-scripts\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.831567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebad303f-6b9b-4ae1-b012-0862a6280179-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.831636 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.831671 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933334 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933367 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933399 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-config-data\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933459 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebad303f-6b9b-4ae1-b012-0862a6280179-logs\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwx5x\" (UniqueName: \"kubernetes.io/projected/ebad303f-6b9b-4ae1-b012-0862a6280179-kube-api-access-xwx5x\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933590 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-scripts\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933620 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebad303f-6b9b-4ae1-b012-0862a6280179-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.933663 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.934430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebad303f-6b9b-4ae1-b012-0862a6280179-logs\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.941486 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebad303f-6b9b-4ae1-b012-0862a6280179-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.946779 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.947028 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.948890 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-config-data\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.951095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebad303f-6b9b-4ae1-b012-0862a6280179-scripts\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.964392 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwx5x\" (UniqueName: \"kubernetes.io/projected/ebad303f-6b9b-4ae1-b012-0862a6280179-kube-api-access-xwx5x\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:06 crc kubenswrapper[4858]: I1205 14:17:06.983406 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"ebad303f-6b9b-4ae1-b012-0862a6280179\") " pod="openstack/glance-default-external-api-0" Dec 05 14:17:07 crc kubenswrapper[4858]: I1205 14:17:07.053094 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" Dec 05 14:17:07 crc kubenswrapper[4858]: I1205 14:17:07.105208 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 14:17:07 crc kubenswrapper[4858]: I1205 14:17:07.579256 4858 scope.go:117] "RemoveContainer" containerID="d5f71e1617e41ffeeab372e991d16b5caa9ac883a3591c6796df2b17aa992587" Dec 05 14:17:07 crc kubenswrapper[4858]: E1205 14:17:07.579717 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69596c746f-2zqwj_openstack(4a64adf2-e7c1-41a6-9e42-ec919354ad16)\"" pod="openstack/heat-api-69596c746f-2zqwj" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" Dec 05 14:17:07 crc kubenswrapper[4858]: I1205 14:17:07.751636 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 14:17:07 crc kubenswrapper[4858]: I1205 14:17:07.923359 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d51a537e-24d5-4083-8c7a-8e7abd0abd49" path="/var/lib/kubelet/pods/d51a537e-24d5-4083-8c7a-8e7abd0abd49/volumes" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.024063 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6d6856d7d8-ln5hc" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.176:8004/healthcheck\": read tcp 10.217.0.2:43218->10.217.0.176:8004: read: connection reset by peer" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.024538 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6d6856d7d8-ln5hc" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.176:8004/healthcheck\": dial tcp 10.217.0.176:8004: connect: connection refused" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.122510 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.123631 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-log" containerID="cri-o://b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382" gracePeriod=30 Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.125576 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-httpd" containerID="cri-o://d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba" gracePeriod=30 Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.592149 4858 generic.go:334] "Generic (PLEG): container finished" podID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerID="195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169" exitCode=0 Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.592208 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerDied","Data":"195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169"} Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.594852 4858 generic.go:334] "Generic (PLEG): container finished" podID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerID="d36b6edcf130177e6b1ba93276b0d588277a4fe9d7d2c482ecd20ecf3f54bb18" exitCode=0 Dec 05 14:17:08 crc 
kubenswrapper[4858]: I1205 14:17:08.594907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d6856d7d8-ln5hc" event={"ID":"a0b76ef1-2ed0-4844-bb75-adafdc72e742","Type":"ContainerDied","Data":"d36b6edcf130177e6b1ba93276b0d588277a4fe9d7d2c482ecd20ecf3f54bb18"} Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.594931 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d6856d7d8-ln5hc" event={"ID":"a0b76ef1-2ed0-4844-bb75-adafdc72e742","Type":"ContainerDied","Data":"2d77da13dc760ebd2242b1b723d44ffe7424eb3423f85cdc962b3b82da3d1f82"} Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.594940 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d77da13dc760ebd2242b1b723d44ffe7424eb3423f85cdc962b3b82da3d1f82" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.596781 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebad303f-6b9b-4ae1-b012-0862a6280179","Type":"ContainerStarted","Data":"dcc48beb893581b3949de4ecc6e228baa2ceaf804f48dc6ebb03272d34c0d0ad"} Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.601248 4858 generic.go:334] "Generic (PLEG): container finished" podID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerID="b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382" exitCode=143 Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.601318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9217a0b0-fdbc-4a4b-8580-57e50d4240d6","Type":"ContainerDied","Data":"b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382"} Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.603386 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d6856d7d8-ln5hc" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.662364 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.713427 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data-custom\") pod \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.713566 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-combined-ca-bundle\") pod \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.713605 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data\") pod \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.713740 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srnv2\" (UniqueName: \"kubernetes.io/projected/a0b76ef1-2ed0-4844-bb75-adafdc72e742-kube-api-access-srnv2\") pod \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\" (UID: \"a0b76ef1-2ed0-4844-bb75-adafdc72e742\") " Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.729465 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a0b76ef1-2ed0-4844-bb75-adafdc72e742" (UID: "a0b76ef1-2ed0-4844-bb75-adafdc72e742"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.734613 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b76ef1-2ed0-4844-bb75-adafdc72e742-kube-api-access-srnv2" (OuterVolumeSpecName: "kube-api-access-srnv2") pod "a0b76ef1-2ed0-4844-bb75-adafdc72e742" (UID: "a0b76ef1-2ed0-4844-bb75-adafdc72e742"). InnerVolumeSpecName "kube-api-access-srnv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.779932 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0b76ef1-2ed0-4844-bb75-adafdc72e742" (UID: "a0b76ef1-2ed0-4844-bb75-adafdc72e742"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.820482 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srnv2\" (UniqueName: \"kubernetes.io/projected/a0b76ef1-2ed0-4844-bb75-adafdc72e742-kube-api-access-srnv2\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.820663 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.820741 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.823528 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data" (OuterVolumeSpecName: "config-data") pod "a0b76ef1-2ed0-4844-bb75-adafdc72e742" (UID: "a0b76ef1-2ed0-4844-bb75-adafdc72e742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.922783 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0b76ef1-2ed0-4844-bb75-adafdc72e742-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:08 crc kubenswrapper[4858]: I1205 14:17:08.961508 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fb787db8-jqwt8" podUID="f9929d39-1191-4732-a51f-16d2f973bf90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 05 14:17:09 crc kubenswrapper[4858]: I1205 14:17:09.637601 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d6856d7d8-ln5hc" Dec 05 14:17:09 crc kubenswrapper[4858]: I1205 14:17:09.638010 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebad303f-6b9b-4ae1-b012-0862a6280179","Type":"ContainerStarted","Data":"ffe2d7e48ff5877af3845ef3af6536b2079bce21c2aa5ac432f8b712f17b7bdb"} Dec 05 14:17:09 crc kubenswrapper[4858]: I1205 14:17:09.701727 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d6856d7d8-ln5hc"] Dec 05 14:17:09 crc kubenswrapper[4858]: I1205 14:17:09.718166 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6d6856d7d8-ln5hc"] Dec 05 14:17:09 crc kubenswrapper[4858]: I1205 14:17:09.864939 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:17:09 crc kubenswrapper[4858]: I1205 14:17:09.866067 4858 scope.go:117] "RemoveContainer" containerID="d5f71e1617e41ffeeab372e991d16b5caa9ac883a3591c6796df2b17aa992587" Dec 05 14:17:09 crc kubenswrapper[4858]: E1205 14:17:09.866646 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69596c746f-2zqwj_openstack(4a64adf2-e7c1-41a6-9e42-ec919354ad16)\"" pod="openstack/heat-api-69596c746f-2zqwj" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" Dec 05 14:17:09 crc kubenswrapper[4858]: I1205 14:17:09.912604 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" path="/var/lib/kubelet/pods/a0b76ef1-2ed0-4844-bb75-adafdc72e742/volumes" Dec 05 14:17:10 crc kubenswrapper[4858]: I1205 14:17:10.463941 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7c5f557b4c-fdhxg" Dec 05 14:17:10 crc kubenswrapper[4858]: I1205 14:17:10.646531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebad303f-6b9b-4ae1-b012-0862a6280179","Type":"ContainerStarted","Data":"6aa5151360cf432a22526feb1d417b48743efa351b182080b80f2df7e450f10a"} Dec 05 14:17:10 crc kubenswrapper[4858]: I1205 14:17:10.677236 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.677217829 podStartE2EDuration="4.677217829s" podCreationTimestamp="2025-12-05 14:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:17:10.669442709 +0000 UTC m=+1239.217040848" watchObservedRunningTime="2025-12-05 14:17:10.677217829 +0000 UTC m=+1239.224815968" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.017392 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.174:8000/healthcheck\": read tcp 10.217.0.2:57830->10.217.0.174:8000: read: connection reset by peer" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.017980 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.174:8000/healthcheck\": dial tcp 10.217.0.174:8000: connect: connection 
refused" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.569600 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.613497 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:50852->10.217.0.152:9292: read: connection reset by peer" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.613853 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:50850->10.217.0.152:9292: read: connection reset by peer" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.671629 4858 generic.go:334] "Generic (PLEG): container finished" podID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerID="f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0" exitCode=0 Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.671717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" event={"ID":"0f24dddb-47d0-42be-9ca3-c3b61bd1580a","Type":"ContainerDied","Data":"f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0"} Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.671749 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.671769 4858 scope.go:117] "RemoveContainer" containerID="f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.671757 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7974d785f8-5hhw6" event={"ID":"0f24dddb-47d0-42be-9ca3-c3b61bd1580a","Type":"ContainerDied","Data":"40223cb06ac0df265f0c6972aee684bc08ba5f30268352148b881c937707fbc1"} Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.682770 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmpqh\" (UniqueName: \"kubernetes.io/projected/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-kube-api-access-gmpqh\") pod \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.682843 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data-custom\") pod \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.682998 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-combined-ca-bundle\") pod \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.683019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data\") pod \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\" (UID: \"0f24dddb-47d0-42be-9ca3-c3b61bd1580a\") " Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.699852 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f24dddb-47d0-42be-9ca3-c3b61bd1580a" (UID: "0f24dddb-47d0-42be-9ca3-c3b61bd1580a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.712122 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-kube-api-access-gmpqh" (OuterVolumeSpecName: "kube-api-access-gmpqh") pod "0f24dddb-47d0-42be-9ca3-c3b61bd1580a" (UID: "0f24dddb-47d0-42be-9ca3-c3b61bd1580a"). InnerVolumeSpecName "kube-api-access-gmpqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.786082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f24dddb-47d0-42be-9ca3-c3b61bd1580a" (UID: "0f24dddb-47d0-42be-9ca3-c3b61bd1580a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.799142 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.799179 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmpqh\" (UniqueName: \"kubernetes.io/projected/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-kube-api-access-gmpqh\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.799190 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.803762 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data" (OuterVolumeSpecName: "config-data") pod "0f24dddb-47d0-42be-9ca3-c3b61bd1580a" (UID: "0f24dddb-47d0-42be-9ca3-c3b61bd1580a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.804587 4858 scope.go:117] "RemoveContainer" containerID="f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0" Dec 05 14:17:11 crc kubenswrapper[4858]: E1205 14:17:11.806277 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0\": container with ID starting with f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0 not found: ID does not exist" containerID="f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.806345 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0"} err="failed to get container status \"f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0\": rpc error: code = NotFound desc = could not find container \"f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0\": container with ID starting with f6ae3daa0bfe3f6b9bee19028d67167b6a230cfd191e643c102bc4d223cbded0 not found: ID does not exist" Dec 05 14:17:11 crc kubenswrapper[4858]: I1205 14:17:11.902352 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f24dddb-47d0-42be-9ca3-c3b61bd1580a-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.013884 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7974d785f8-5hhw6"] Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.020052 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7974d785f8-5hhw6"] Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.096515 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-httpd-run\") pod \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212308 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-internal-tls-certs\") pod \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212330 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-logs\") pod \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212372 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-combined-ca-bundle\") pod \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212391 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-scripts\") pod \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212458 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfvdj\" (UniqueName: \"kubernetes.io/projected/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-kube-api-access-zfvdj\") pod \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212492 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212555 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-config-data\") pod \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\" (UID: \"9217a0b0-fdbc-4a4b-8580-57e50d4240d6\") " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.212966 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9217a0b0-fdbc-4a4b-8580-57e50d4240d6" (UID: "9217a0b0-fdbc-4a4b-8580-57e50d4240d6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.213940 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.214576 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-logs" (OuterVolumeSpecName: "logs") pod "9217a0b0-fdbc-4a4b-8580-57e50d4240d6" (UID: "9217a0b0-fdbc-4a4b-8580-57e50d4240d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.227388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "9217a0b0-fdbc-4a4b-8580-57e50d4240d6" (UID: "9217a0b0-fdbc-4a4b-8580-57e50d4240d6"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.235956 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-scripts" (OuterVolumeSpecName: "scripts") pod "9217a0b0-fdbc-4a4b-8580-57e50d4240d6" (UID: "9217a0b0-fdbc-4a4b-8580-57e50d4240d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.236461 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-kube-api-access-zfvdj" (OuterVolumeSpecName: "kube-api-access-zfvdj") pod "9217a0b0-fdbc-4a4b-8580-57e50d4240d6" (UID: "9217a0b0-fdbc-4a4b-8580-57e50d4240d6"). InnerVolumeSpecName "kube-api-access-zfvdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.264529 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9217a0b0-fdbc-4a4b-8580-57e50d4240d6" (UID: "9217a0b0-fdbc-4a4b-8580-57e50d4240d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.315802 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfvdj\" (UniqueName: \"kubernetes.io/projected/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-kube-api-access-zfvdj\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.315866 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.315876 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.315886 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.315895 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.329999 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-config-data" (OuterVolumeSpecName: "config-data") pod "9217a0b0-fdbc-4a4b-8580-57e50d4240d6" (UID: "9217a0b0-fdbc-4a4b-8580-57e50d4240d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.336867 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9217a0b0-fdbc-4a4b-8580-57e50d4240d6" (UID: "9217a0b0-fdbc-4a4b-8580-57e50d4240d6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.362456 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.418232 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.418508 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.418614 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9217a0b0-fdbc-4a4b-8580-57e50d4240d6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.680321 4858 generic.go:334] "Generic (PLEG): container finished" podID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerID="d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba" exitCode=0 Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.680605 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.680626 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9217a0b0-fdbc-4a4b-8580-57e50d4240d6","Type":"ContainerDied","Data":"d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba"} Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.681273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9217a0b0-fdbc-4a4b-8580-57e50d4240d6","Type":"ContainerDied","Data":"8c771fc48da6d7066f7fcc7cef9fe6988b3b541d971661c63dcdd3f0c33e69d4"} Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.681328 4858 scope.go:117] "RemoveContainer" containerID="d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.725929 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.726926 4858 scope.go:117] "RemoveContainer" containerID="b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.742154 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.754589 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:17:12 crc kubenswrapper[4858]: E1205 14:17:12.755234 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerName="heat-api" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.755373 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerName="heat-api" Dec 05 14:17:12 crc kubenswrapper[4858]: E1205 14:17:12.755443 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" 
containerName="glance-log" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.755514 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-log" Dec 05 14:17:12 crc kubenswrapper[4858]: E1205 14:17:12.755594 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-httpd" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.755687 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-httpd" Dec 05 14:17:12 crc kubenswrapper[4858]: E1205 14:17:12.755754 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerName="heat-cfnapi" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.755947 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerName="heat-cfnapi" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.756345 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" containerName="heat-cfnapi" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.756566 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b76ef1-2ed0-4844-bb75-adafdc72e742" containerName="heat-api" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.756654 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-httpd" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.756832 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" containerName="glance-log" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.763401 4858 scope.go:117] "RemoveContainer" containerID="d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.763780 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: E1205 14:17:12.767573 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba\": container with ID starting with d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba not found: ID does not exist" containerID="d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.767611 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba"} err="failed to get container status \"d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba\": rpc error: code = NotFound desc = could not find container \"d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba\": container with ID starting with d9385435521097a2bb3ec2890857206bb2bf6f648be48b645e172e8b035c42ba not found: ID does not exist" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.767640 4858 scope.go:117] "RemoveContainer" containerID="b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.767961 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 14:17:12 crc kubenswrapper[4858]: E1205 14:17:12.768450 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382\": container with ID starting with b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382 not found: ID does not exist" containerID="b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.768478 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382"} err="failed to get container status \"b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382\": rpc error: code = NotFound desc = could not find container \"b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382\": container with ID starting with b11bb3ba534dac7e146854b88156e9ba1a7c1d8680cf636f2b85c594dfe1f382 not found: ID does not exist" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.768594 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.769918 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.825506 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.825561 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac719845-f27b-4899-b245-487bcda2a5b8-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.825591 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.825635 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.826139 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.826244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxv5\" (UniqueName: \"kubernetes.io/projected/ac719845-f27b-4899-b245-487bcda2a5b8-kube-api-access-nsxv5\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.826416 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.826634 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac719845-f27b-4899-b245-487bcda2a5b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.927990 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.928084 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxv5\" (UniqueName: \"kubernetes.io/projected/ac719845-f27b-4899-b245-487bcda2a5b8-kube-api-access-nsxv5\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.928213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.928231 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac719845-f27b-4899-b245-487bcda2a5b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.928260 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.928279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac719845-f27b-4899-b245-487bcda2a5b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.928304 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.928354 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.929540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac719845-f27b-4899-b245-487bcda2a5b8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.931136 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.931715 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac719845-f27b-4899-b245-487bcda2a5b8-logs\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.935586 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.937927 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.938640 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.940695 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac719845-f27b-4899-b245-487bcda2a5b8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.952030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxv5\" (UniqueName: \"kubernetes.io/projected/ac719845-f27b-4899-b245-487bcda2a5b8-kube-api-access-nsxv5\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:12 crc kubenswrapper[4858]: I1205 14:17:12.985914 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ac719845-f27b-4899-b245-487bcda2a5b8\") " pod="openstack/glance-default-internal-api-0" Dec 05 14:17:13 crc kubenswrapper[4858]: I1205 14:17:13.130549 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:13 crc kubenswrapper[4858]: I1205 14:17:13.835709 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 14:17:13 crc kubenswrapper[4858]: W1205 14:17:13.845023 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac719845_f27b_4899_b245_487bcda2a5b8.slice/crio-5cd9fa461819dec6b2aa6ca45402e3f50b3cdec1d2c7e98ab7503c1995d3238b WatchSource:0}: Error finding container 5cd9fa461819dec6b2aa6ca45402e3f50b3cdec1d2c7e98ab7503c1995d3238b: Status 404 returned error can't find the container with id 5cd9fa461819dec6b2aa6ca45402e3f50b3cdec1d2c7e98ab7503c1995d3238b Dec 05 14:17:13 crc kubenswrapper[4858]: I1205 14:17:13.911241 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f24dddb-47d0-42be-9ca3-c3b61bd1580a" path="/var/lib/kubelet/pods/0f24dddb-47d0-42be-9ca3-c3b61bd1580a/volumes" Dec 05 14:17:13 crc kubenswrapper[4858]: I1205 14:17:13.912181 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9217a0b0-fdbc-4a4b-8580-57e50d4240d6" path="/var/lib/kubelet/pods/9217a0b0-fdbc-4a4b-8580-57e50d4240d6/volumes" Dec 05 14:17:14 crc kubenswrapper[4858]: I1205 14:17:14.717273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ac719845-f27b-4899-b245-487bcda2a5b8","Type":"ContainerStarted","Data":"33194fe5b0ef897a215968d778cdab2872aae2d5037a5bfb6114dacb329d48c3"} Dec 05 14:17:14 crc kubenswrapper[4858]: I1205 14:17:14.717608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ac719845-f27b-4899-b245-487bcda2a5b8","Type":"ContainerStarted","Data":"5cd9fa461819dec6b2aa6ca45402e3f50b3cdec1d2c7e98ab7503c1995d3238b"} Dec 05 14:17:14 crc kubenswrapper[4858]: I1205 14:17:14.759917 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:17:14 crc kubenswrapper[4858]: I1205 14:17:14.759964 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:17:15 crc kubenswrapper[4858]: I1205 14:17:15.732447 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ac719845-f27b-4899-b245-487bcda2a5b8","Type":"ContainerStarted","Data":"429a22d408000bc9fa29613377b809f6264c4b5a74656ef2fe32f585b4b5dc66"} Dec 05 14:17:15 crc kubenswrapper[4858]: I1205 14:17:15.763328 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.763304543 podStartE2EDuration="3.763304543s" podCreationTimestamp="2025-12-05 14:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:17:15.751964687 +0000 UTC m=+1244.299562826" watchObservedRunningTime="2025-12-05 14:17:15.763304543 +0000 UTC m=+1244.310902682" Dec 05 
14:17:16 crc kubenswrapper[4858]: I1205 14:17:16.433280 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-588f4db944-bv2ww" Dec 05 14:17:16 crc kubenswrapper[4858]: I1205 14:17:16.492248 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-69596c746f-2zqwj"] Dec 05 14:17:16 crc kubenswrapper[4858]: I1205 14:17:16.585680 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-695d8f5ccf-btdn9" Dec 05 14:17:16 crc kubenswrapper[4858]: I1205 14:17:16.675834 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5648884998-l7brp"] Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.047510 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.110054 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.110090 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.159861 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-combined-ca-bundle\") pod \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.159920 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data-custom\") pod \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.159948 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpllv\" (UniqueName: \"kubernetes.io/projected/4a64adf2-e7c1-41a6-9e42-ec919354ad16-kube-api-access-vpllv\") pod \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.159999 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data\") pod \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\" (UID: \"4a64adf2-e7c1-41a6-9e42-ec919354ad16\") " Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.174211 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a64adf2-e7c1-41a6-9e42-ec919354ad16-kube-api-access-vpllv" (OuterVolumeSpecName: "kube-api-access-vpllv") pod "4a64adf2-e7c1-41a6-9e42-ec919354ad16" (UID: "4a64adf2-e7c1-41a6-9e42-ec919354ad16"). InnerVolumeSpecName "kube-api-access-vpllv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.183611 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.190369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a64adf2-e7c1-41a6-9e42-ec919354ad16" (UID: "4a64adf2-e7c1-41a6-9e42-ec919354ad16"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.214136 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.239767 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data" (OuterVolumeSpecName: "config-data") pod "4a64adf2-e7c1-41a6-9e42-ec919354ad16" (UID: "4a64adf2-e7c1-41a6-9e42-ec919354ad16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.261903 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.261927 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpllv\" (UniqueName: \"kubernetes.io/projected/4a64adf2-e7c1-41a6-9e42-ec919354ad16-kube-api-access-vpllv\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.261938 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.262223 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a64adf2-e7c1-41a6-9e42-ec919354ad16" (UID: "4a64adf2-e7c1-41a6-9e42-ec919354ad16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.340478 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.363151 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a64adf2-e7c1-41a6-9e42-ec919354ad16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.464680 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7bkl\" (UniqueName: \"kubernetes.io/projected/7c788136-a2e0-462f-bda7-a49478e425c7-kube-api-access-j7bkl\") pod \"7c788136-a2e0-462f-bda7-a49478e425c7\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.464874 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data-custom\") pod \"7c788136-a2e0-462f-bda7-a49478e425c7\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.465384 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data\") pod \"7c788136-a2e0-462f-bda7-a49478e425c7\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.465420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-combined-ca-bundle\") pod \"7c788136-a2e0-462f-bda7-a49478e425c7\" (UID: \"7c788136-a2e0-462f-bda7-a49478e425c7\") " Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.468192 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7c788136-a2e0-462f-bda7-a49478e425c7" (UID: "7c788136-a2e0-462f-bda7-a49478e425c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.470993 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c788136-a2e0-462f-bda7-a49478e425c7-kube-api-access-j7bkl" (OuterVolumeSpecName: "kube-api-access-j7bkl") pod "7c788136-a2e0-462f-bda7-a49478e425c7" (UID: "7c788136-a2e0-462f-bda7-a49478e425c7"). InnerVolumeSpecName "kube-api-access-j7bkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.493796 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c788136-a2e0-462f-bda7-a49478e425c7" (UID: "7c788136-a2e0-462f-bda7-a49478e425c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.513188 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data" (OuterVolumeSpecName: "config-data") pod "7c788136-a2e0-462f-bda7-a49478e425c7" (UID: "7c788136-a2e0-462f-bda7-a49478e425c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.567574 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.567611 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.567624 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c788136-a2e0-462f-bda7-a49478e425c7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.567637 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7bkl\" (UniqueName: \"kubernetes.io/projected/7c788136-a2e0-462f-bda7-a49478e425c7-kube-api-access-j7bkl\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.760720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5648884998-l7brp" event={"ID":"7c788136-a2e0-462f-bda7-a49478e425c7","Type":"ContainerDied","Data":"d3d857b849c893b983a005ee088f7679b823a8ee09ca3b5c05dd5e71d868acb0"} Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.760780 4858 scope.go:117] "RemoveContainer" containerID="36b6bd66da07e63852aa7c604e412eb40ad1114849b374fb88b1452dfd7ec797" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.760955 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5648884998-l7brp" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.767445 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69596c746f-2zqwj" event={"ID":"4a64adf2-e7c1-41a6-9e42-ec919354ad16","Type":"ContainerDied","Data":"1303ee0a740b2a3e48cb39c41ed67c16b6030f6ca982df5809a2a90f74183fb1"} Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.767545 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-69596c746f-2zqwj" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.767986 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.771235 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.793900 4858 scope.go:117] "RemoveContainer" containerID="d5f71e1617e41ffeeab372e991d16b5caa9ac883a3591c6796df2b17aa992587" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.814146 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5648884998-l7brp"] Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.823809 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5648884998-l7brp"] Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.832545 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-69596c746f-2zqwj"] Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.842413 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-69596c746f-2zqwj"] Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.913247 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" path="/var/lib/kubelet/pods/4a64adf2-e7c1-41a6-9e42-ec919354ad16/volumes" Dec 05 14:17:17 crc kubenswrapper[4858]: I1205 14:17:17.913913 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" path="/var/lib/kubelet/pods/7c788136-a2e0-462f-bda7-a49478e425c7/volumes" Dec 05 14:17:18 crc kubenswrapper[4858]: I1205 14:17:18.656160 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 05 14:17:18 crc kubenswrapper[4858]: I1205 14:17:18.962368 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66fb787db8-jqwt8" podUID="f9929d39-1191-4732-a51f-16d2f973bf90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Dec 05 14:17:19 crc kubenswrapper[4858]: I1205 14:17:19.793761 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:17:19 crc kubenswrapper[4858]: I1205 14:17:19.793797 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:17:20 crc kubenswrapper[4858]: I1205 14:17:20.414086 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 14:17:20 crc kubenswrapper[4858]: I1205 14:17:20.639504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-64fbdd66f9-gghv6" Dec 05 14:17:20 crc kubenswrapper[4858]: I1205 14:17:20.682814 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7c5f557b4c-fdhxg"] Dec 05 14:17:20 crc kubenswrapper[4858]: I1205 14:17:20.683408 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7c5f557b4c-fdhxg" 
podUID="b958f7a4-1b99-4ce8-badb-52855609ec9d" containerName="heat-engine" containerID="cri-o://928f601cb8d9654ee458318ba6765c0a9723b7b822ef2f70f97324bf36d8928f" gracePeriod=60 Dec 05 14:17:20 crc kubenswrapper[4858]: I1205 14:17:20.802923 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:17:21 crc kubenswrapper[4858]: I1205 14:17:21.916177 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 14:17:23 crc kubenswrapper[4858]: I1205 14:17:23.131839 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:23 crc kubenswrapper[4858]: I1205 14:17:23.131882 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:23 crc kubenswrapper[4858]: I1205 14:17:23.181505 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:23 crc kubenswrapper[4858]: I1205 14:17:23.201557 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:23 crc kubenswrapper[4858]: I1205 14:17:23.274026 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 05 14:17:23 crc kubenswrapper[4858]: I1205 14:17:23.836081 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:23 crc kubenswrapper[4858]: I1205 14:17:23.836427 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:25 crc kubenswrapper[4858]: I1205 14:17:25.864423 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:17:25 crc kubenswrapper[4858]: I1205 14:17:25.864692 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:17:27 crc kubenswrapper[4858]: I1205 14:17:27.609846 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:27 crc kubenswrapper[4858]: I1205 14:17:27.611055 4858 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 14:17:27 crc kubenswrapper[4858]: I1205 14:17:27.840410 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 14:17:30 crc kubenswrapper[4858]: E1205 14:17:30.424142 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="928f601cb8d9654ee458318ba6765c0a9723b7b822ef2f70f97324bf36d8928f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 05 14:17:30 crc kubenswrapper[4858]: E1205 14:17:30.425784 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="928f601cb8d9654ee458318ba6765c0a9723b7b822ef2f70f97324bf36d8928f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 05 14:17:30 crc kubenswrapper[4858]: E1205 14:17:30.427045 4858 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="928f601cb8d9654ee458318ba6765c0a9723b7b822ef2f70f97324bf36d8928f" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 05 14:17:30 crc kubenswrapper[4858]: E1205 14:17:30.427084 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7c5f557b4c-fdhxg" podUID="b958f7a4-1b99-4ce8-badb-52855609ec9d" containerName="heat-engine" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.693590 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.814707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-sg-core-conf-yaml\") pod \"81d84be8-b4b4-4e29-a94c-fcca489809fb\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.815040 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lndw\" (UniqueName: \"kubernetes.io/projected/81d84be8-b4b4-4e29-a94c-fcca489809fb-kube-api-access-7lndw\") pod \"81d84be8-b4b4-4e29-a94c-fcca489809fb\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.818976 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-combined-ca-bundle\") pod \"81d84be8-b4b4-4e29-a94c-fcca489809fb\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.819075 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-config-data\") pod \"81d84be8-b4b4-4e29-a94c-fcca489809fb\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.819245 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-log-httpd\") pod \"81d84be8-b4b4-4e29-a94c-fcca489809fb\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.819279 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-scripts\") pod \"81d84be8-b4b4-4e29-a94c-fcca489809fb\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.819312 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-run-httpd\") pod \"81d84be8-b4b4-4e29-a94c-fcca489809fb\" (UID: \"81d84be8-b4b4-4e29-a94c-fcca489809fb\") " Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.822317 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"81d84be8-b4b4-4e29-a94c-fcca489809fb" (UID: "81d84be8-b4b4-4e29-a94c-fcca489809fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.822874 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "81d84be8-b4b4-4e29-a94c-fcca489809fb" (UID: "81d84be8-b4b4-4e29-a94c-fcca489809fb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.855151 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d84be8-b4b4-4e29-a94c-fcca489809fb-kube-api-access-7lndw" (OuterVolumeSpecName: "kube-api-access-7lndw") pod "81d84be8-b4b4-4e29-a94c-fcca489809fb" (UID: "81d84be8-b4b4-4e29-a94c-fcca489809fb"). InnerVolumeSpecName "kube-api-access-7lndw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.863369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-scripts" (OuterVolumeSpecName: "scripts") pod "81d84be8-b4b4-4e29-a94c-fcca489809fb" (UID: "81d84be8-b4b4-4e29-a94c-fcca489809fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.889271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "81d84be8-b4b4-4e29-a94c-fcca489809fb" (UID: "81d84be8-b4b4-4e29-a94c-fcca489809fb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.929216 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.929250 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.929264 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/81d84be8-b4b4-4e29-a94c-fcca489809fb-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.929280 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.929293 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lndw\" (UniqueName: \"kubernetes.io/projected/81d84be8-b4b4-4e29-a94c-fcca489809fb-kube-api-access-7lndw\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.949361 4858 generic.go:334] "Generic (PLEG): container finished" podID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerID="dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834" exitCode=137 Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.949463 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.965139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerDied","Data":"dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834"} Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.965351 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"81d84be8-b4b4-4e29-a94c-fcca489809fb","Type":"ContainerDied","Data":"cee21aec4f7d30122a10b2d56145059c9ed50792e49a7c987a9cb3f6a3c06ba4"} Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.965433 4858 scope.go:117] "RemoveContainer" containerID="dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834" Dec 05 14:17:31 crc kubenswrapper[4858]: I1205 14:17:31.984483 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81d84be8-b4b4-4e29-a94c-fcca489809fb" (UID: "81d84be8-b4b4-4e29-a94c-fcca489809fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.000126 4858 scope.go:117] "RemoveContainer" containerID="32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.030570 4858 scope.go:117] "RemoveContainer" containerID="85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.031783 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.049369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-config-data" (OuterVolumeSpecName: "config-data") pod "81d84be8-b4b4-4e29-a94c-fcca489809fb" (UID: "81d84be8-b4b4-4e29-a94c-fcca489809fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.049577 4858 scope.go:117] "RemoveContainer" containerID="195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.080257 4858 scope.go:117] "RemoveContainer" containerID="dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.081170 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834\": container with ID starting with dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834 not found: ID does not exist" containerID="dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.081212 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834"} err="failed to get container status \"dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834\": rpc error: code = NotFound desc = could not find container \"dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834\": container with ID starting with dc1b65a1e1f870e062830187617081c8e4cd6484147800dc75832e64f2adc834 not found: ID does not exist" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.081242 4858 scope.go:117] "RemoveContainer" containerID="32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.081496 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103\": container with ID starting with 32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103 not found: ID does not exist" containerID="32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.081521 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103"} err="failed to get container status \"32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103\": rpc error: code = NotFound desc = could not 
find container \"32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103\": container with ID starting with 32b00334517561b97227547cc356c5f11ff7b00ec742b08ec54f03dcdb524103 not found: ID does not exist" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.081779 4858 scope.go:117] "RemoveContainer" containerID="85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.083713 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435\": container with ID starting with 85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435 not found: ID does not exist" containerID="85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.083848 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435"} err="failed to get container status \"85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435\": rpc error: code = NotFound desc = could not find container \"85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435\": container with ID starting with 85da42a486bc8c6625d071107c564cf72ccf47fd0539cc8a3234bb53b2d7b435 not found: ID does not exist" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.083940 4858 scope.go:117] "RemoveContainer" containerID="195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.084364 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169\": container with ID starting with 195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169 not found: ID does not exist" containerID="195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.084395 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169"} err="failed to get container status \"195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169\": rpc error: code = NotFound desc = could not find container \"195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169\": container with ID starting with 195f57a523038c65581e25e4fca1b16951b7cd7edef6dfd4a3acb8c755604169 not found: ID does not exist" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.133908 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81d84be8-b4b4-4e29-a94c-fcca489809fb-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.288688 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.318479 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.325549 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.326040 4858 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7c788136-a2e0-462f-bda7-a49478e425c7" containerName="heat-cfnapi" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326064 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" containerName="heat-cfnapi" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.326086 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="ceilometer-notification-agent" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326095 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="ceilometer-notification-agent" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.326108 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="ceilometer-central-agent" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326115 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="ceilometer-central-agent" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.326132 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" containerName="heat-api" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326139 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" containerName="heat-api" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.326150 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="sg-core" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326157 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="sg-core" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.326171 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="proxy-httpd" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326177 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="proxy-httpd" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.326207 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" containerName="heat-api" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326216 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" containerName="heat-api" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326404 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" containerName="heat-api" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326424 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="ceilometer-central-agent" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326432 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="ceilometer-notification-agent" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326444 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" containerName="heat-cfnapi" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326452 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="proxy-httpd" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326474 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" containerName="sg-core" Dec 05 14:17:32 crc kubenswrapper[4858]: E1205 14:17:32.326682 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" containerName="heat-cfnapi" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326693 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" containerName="heat-cfnapi" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326942 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a64adf2-e7c1-41a6-9e42-ec919354ad16" containerName="heat-api" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.326971 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c788136-a2e0-462f-bda7-a49478e425c7" containerName="heat-cfnapi" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.328545 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.334113 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.334580 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.351121 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.438474 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.438581 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwwcc\" (UniqueName: \"kubernetes.io/projected/8a655a7d-df8c-4d54-8233-dab33dfbc233-kube-api-access-xwwcc\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.438638 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-log-httpd\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.438665 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-run-httpd\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.438686 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-config-data\") pod \"ceilometer-0\" (UID: 
\"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.438710 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-scripts\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.438733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.540213 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwwcc\" (UniqueName: \"kubernetes.io/projected/8a655a7d-df8c-4d54-8233-dab33dfbc233-kube-api-access-xwwcc\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.540844 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-log-httpd\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.540955 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-run-httpd\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.541058 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-config-data\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.541144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-scripts\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.541225 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.541319 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-log-httpd\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.541326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.542158 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-run-httpd\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.546575 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.547559 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-config-data\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.554611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-scripts\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.555281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.569405 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwwcc\" (UniqueName: \"kubernetes.io/projected/8a655a7d-df8c-4d54-8233-dab33dfbc233-kube-api-access-xwwcc\") pod \"ceilometer-0\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " pod="openstack/ceilometer-0" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.622348 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.626918 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:17:32 crc kubenswrapper[4858]: I1205 14:17:32.701779 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:33 crc kubenswrapper[4858]: I1205 14:17:33.344376 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:33 crc kubenswrapper[4858]: I1205 14:17:33.916727 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d84be8-b4b4-4e29-a94c-fcca489809fb" path="/var/lib/kubelet/pods/81d84be8-b4b4-4e29-a94c-fcca489809fb/volumes" Dec 05 14:17:33 crc kubenswrapper[4858]: I1205 14:17:33.989588 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerStarted","Data":"f27fbe1b30f1efd8acf5deb8f048cd8d5eb2bf15dc0009bb91642ae5e29b402e"} Dec 05 14:17:33 crc kubenswrapper[4858]: I1205 14:17:33.989642 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerStarted","Data":"80d9b3c82286626d01a7a551423ac2dda4218849334244cf25ef2c7b579b9233"} Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.001327 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerStarted","Data":"9a01189ab636d36bf8f3338d71192b54ed80a864a245b21fb12d3b2364efdc0c"} Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.001881 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerStarted","Data":"b4cff5c3b7a81018b6ce33a08894587d83068e67a18b8eb972c5c5682919f942"} Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.162369 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.304537 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66fb787db8-jqwt8" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.356994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.411616 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66fd8d549b-n87dk"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.617337 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-6546b"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.618580 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.639585 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6546b"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.706799 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7bdnq"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.708105 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.735838 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcbb580-deba-4812-a820-2170d122b199-operator-scripts\") pod \"nova-api-db-create-6546b\" (UID: \"fdcbb580-deba-4812-a820-2170d122b199\") " pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.735922 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctftf\" (UniqueName: \"kubernetes.io/projected/fdcbb580-deba-4812-a820-2170d122b199-kube-api-access-ctftf\") pod \"nova-api-db-create-6546b\" (UID: \"fdcbb580-deba-4812-a820-2170d122b199\") " pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.756527 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7bdnq"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.840674 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t26rc\" (UniqueName: \"kubernetes.io/projected/960299c2-8250-45a8-a10c-c4ee4b105910-kube-api-access-t26rc\") pod \"nova-cell0-db-create-7bdnq\" (UID: \"960299c2-8250-45a8-a10c-c4ee4b105910\") " pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.841093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcbb580-deba-4812-a820-2170d122b199-operator-scripts\") pod \"nova-api-db-create-6546b\" (UID: \"fdcbb580-deba-4812-a820-2170d122b199\") " pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.841126 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960299c2-8250-45a8-a10c-c4ee4b105910-operator-scripts\") pod \"nova-cell0-db-create-7bdnq\" (UID: \"960299c2-8250-45a8-a10c-c4ee4b105910\") " pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.841189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctftf\" (UniqueName: \"kubernetes.io/projected/fdcbb580-deba-4812-a820-2170d122b199-kube-api-access-ctftf\") pod \"nova-api-db-create-6546b\" (UID: \"fdcbb580-deba-4812-a820-2170d122b199\") " pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.842516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcbb580-deba-4812-a820-2170d122b199-operator-scripts\") pod \"nova-api-db-create-6546b\" (UID: \"fdcbb580-deba-4812-a820-2170d122b199\") " pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.853327 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-eda8-account-create-update-4d2w5"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.854598 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.863417 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.863691 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctftf\" (UniqueName: \"kubernetes.io/projected/fdcbb580-deba-4812-a820-2170d122b199-kube-api-access-ctftf\") pod \"nova-api-db-create-6546b\" (UID: \"fdcbb580-deba-4812-a820-2170d122b199\") " pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.881338 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-eda8-account-create-update-4d2w5"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.935760 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.948639 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960299c2-8250-45a8-a10c-c4ee4b105910-operator-scripts\") pod \"nova-cell0-db-create-7bdnq\" (UID: \"960299c2-8250-45a8-a10c-c4ee4b105910\") " pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.948801 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfbd339-df73-4eff-adbc-6394489044cd-operator-scripts\") pod \"nova-api-eda8-account-create-update-4d2w5\" (UID: \"9dfbd339-df73-4eff-adbc-6394489044cd\") " pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.948923 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t26rc\" (UniqueName: \"kubernetes.io/projected/960299c2-8250-45a8-a10c-c4ee4b105910-kube-api-access-t26rc\") pod \"nova-cell0-db-create-7bdnq\" (UID: \"960299c2-8250-45a8-a10c-c4ee4b105910\") " pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.949094 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqb72\" (UniqueName: \"kubernetes.io/projected/9dfbd339-df73-4eff-adbc-6394489044cd-kube-api-access-xqb72\") pod \"nova-api-eda8-account-create-update-4d2w5\" (UID: \"9dfbd339-df73-4eff-adbc-6394489044cd\") " pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.949547 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960299c2-8250-45a8-a10c-c4ee4b105910-operator-scripts\") pod \"nova-cell0-db-create-7bdnq\" (UID: \"960299c2-8250-45a8-a10c-c4ee4b105910\") " pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.954490 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jhglf"] Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.955839 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:35 crc kubenswrapper[4858]: I1205 14:17:35.989083 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jhglf"] Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.015655 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t26rc\" (UniqueName: \"kubernetes.io/projected/960299c2-8250-45a8-a10c-c4ee4b105910-kube-api-access-t26rc\") pod \"nova-cell0-db-create-7bdnq\" (UID: \"960299c2-8250-45a8-a10c-c4ee4b105910\") " pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.041506 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon-log" containerID="cri-o://01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3" gracePeriod=30 Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.041745 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" containerID="cri-o://2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568" gracePeriod=30 Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.051576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfbd339-df73-4eff-adbc-6394489044cd-operator-scripts\") pod \"nova-api-eda8-account-create-update-4d2w5\" (UID: \"9dfbd339-df73-4eff-adbc-6394489044cd\") " pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.051635 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2066f614-ad2b-4947-8c14-b9df8e78fcac-operator-scripts\") pod \"nova-cell1-db-create-jhglf\" (UID: \"2066f614-ad2b-4947-8c14-b9df8e78fcac\") " pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.051696 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqb72\" (UniqueName: \"kubernetes.io/projected/9dfbd339-df73-4eff-adbc-6394489044cd-kube-api-access-xqb72\") pod \"nova-api-eda8-account-create-update-4d2w5\" (UID: \"9dfbd339-df73-4eff-adbc-6394489044cd\") " pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.051803 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zbx8\" (UniqueName: \"kubernetes.io/projected/2066f614-ad2b-4947-8c14-b9df8e78fcac-kube-api-access-9zbx8\") pod \"nova-cell1-db-create-jhglf\" (UID: \"2066f614-ad2b-4947-8c14-b9df8e78fcac\") " pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.052324 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfbd339-df73-4eff-adbc-6394489044cd-operator-scripts\") pod \"nova-api-eda8-account-create-update-4d2w5\" (UID: \"9dfbd339-df73-4eff-adbc-6394489044cd\") " pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.064512 4858 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-5ee4-account-create-update-l65v4"] Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.065712 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.072220 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.108605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqb72\" (UniqueName: \"kubernetes.io/projected/9dfbd339-df73-4eff-adbc-6394489044cd-kube-api-access-xqb72\") pod \"nova-api-eda8-account-create-update-4d2w5\" (UID: \"9dfbd339-df73-4eff-adbc-6394489044cd\") " pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.108798 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5ee4-account-create-update-l65v4"] Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.157335 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2066f614-ad2b-4947-8c14-b9df8e78fcac-operator-scripts\") pod \"nova-cell1-db-create-jhglf\" (UID: \"2066f614-ad2b-4947-8c14-b9df8e78fcac\") " pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.157384 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2455633-0480-46f9-b598-4d12d4414a5a-operator-scripts\") pod \"nova-cell0-5ee4-account-create-update-l65v4\" (UID: \"b2455633-0480-46f9-b598-4d12d4414a5a\") " pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.157472 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgd98\" (UniqueName: \"kubernetes.io/projected/b2455633-0480-46f9-b598-4d12d4414a5a-kube-api-access-xgd98\") pod \"nova-cell0-5ee4-account-create-update-l65v4\" (UID: \"b2455633-0480-46f9-b598-4d12d4414a5a\") " pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.157548 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zbx8\" (UniqueName: \"kubernetes.io/projected/2066f614-ad2b-4947-8c14-b9df8e78fcac-kube-api-access-9zbx8\") pod \"nova-cell1-db-create-jhglf\" (UID: \"2066f614-ad2b-4947-8c14-b9df8e78fcac\") " pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.158710 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2066f614-ad2b-4947-8c14-b9df8e78fcac-operator-scripts\") pod \"nova-cell1-db-create-jhglf\" (UID: \"2066f614-ad2b-4947-8c14-b9df8e78fcac\") " pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.192177 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zbx8\" (UniqueName: \"kubernetes.io/projected/2066f614-ad2b-4947-8c14-b9df8e78fcac-kube-api-access-9zbx8\") pod \"nova-cell1-db-create-jhglf\" (UID: \"2066f614-ad2b-4947-8c14-b9df8e78fcac\") " pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.242398 4858 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9138-account-create-update-sj4qg"] Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.244057 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.250087 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.259800 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgd98\" (UniqueName: \"kubernetes.io/projected/b2455633-0480-46f9-b598-4d12d4414a5a-kube-api-access-xgd98\") pod \"nova-cell0-5ee4-account-create-update-l65v4\" (UID: \"b2455633-0480-46f9-b598-4d12d4414a5a\") " pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.259950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2455633-0480-46f9-b598-4d12d4414a5a-operator-scripts\") pod \"nova-cell0-5ee4-account-create-update-l65v4\" (UID: \"b2455633-0480-46f9-b598-4d12d4414a5a\") " pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.260671 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2455633-0480-46f9-b598-4d12d4414a5a-operator-scripts\") pod \"nova-cell0-5ee4-account-create-update-l65v4\" (UID: \"b2455633-0480-46f9-b598-4d12d4414a5a\") " pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.281396 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9138-account-create-update-sj4qg"] Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.299326 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.318279 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgd98\" (UniqueName: \"kubernetes.io/projected/b2455633-0480-46f9-b598-4d12d4414a5a-kube-api-access-xgd98\") pod \"nova-cell0-5ee4-account-create-update-l65v4\" (UID: \"b2455633-0480-46f9-b598-4d12d4414a5a\") " pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.333410 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.364183 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.366128 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd999106-5891-4eea-8021-c3c7d5899b3f-operator-scripts\") pod \"nova-cell1-9138-account-create-update-sj4qg\" (UID: \"dd999106-5891-4eea-8021-c3c7d5899b3f\") " pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.366206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrlr\" (UniqueName: \"kubernetes.io/projected/dd999106-5891-4eea-8021-c3c7d5899b3f-kube-api-access-gkrlr\") pod \"nova-cell1-9138-account-create-update-sj4qg\" (UID: \"dd999106-5891-4eea-8021-c3c7d5899b3f\") " pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.424873 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.481895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd999106-5891-4eea-8021-c3c7d5899b3f-operator-scripts\") pod \"nova-cell1-9138-account-create-update-sj4qg\" (UID: \"dd999106-5891-4eea-8021-c3c7d5899b3f\") " pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.481977 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkrlr\" (UniqueName: \"kubernetes.io/projected/dd999106-5891-4eea-8021-c3c7d5899b3f-kube-api-access-gkrlr\") pod \"nova-cell1-9138-account-create-update-sj4qg\" (UID: \"dd999106-5891-4eea-8021-c3c7d5899b3f\") " pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.483195 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd999106-5891-4eea-8021-c3c7d5899b3f-operator-scripts\") pod \"nova-cell1-9138-account-create-update-sj4qg\" (UID: \"dd999106-5891-4eea-8021-c3c7d5899b3f\") " pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.537554 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkrlr\" (UniqueName: \"kubernetes.io/projected/dd999106-5891-4eea-8021-c3c7d5899b3f-kube-api-access-gkrlr\") pod \"nova-cell1-9138-account-create-update-sj4qg\" (UID: \"dd999106-5891-4eea-8021-c3c7d5899b3f\") " pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.567216 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:36 crc kubenswrapper[4858]: I1205 14:17:36.945655 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-6546b"] Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.072012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6546b" event={"ID":"fdcbb580-deba-4812-a820-2170d122b199","Type":"ContainerStarted","Data":"968a8bfd143bca03b98fcee8622e2dc27c15753ce8b5d4291ff60d17cf15435a"} Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.106045 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerStarted","Data":"3784da1e072dc24807791940b836a25d3f8e287453787dfb9cf381329acc2702"} Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.106274 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="ceilometer-central-agent" containerID="cri-o://f27fbe1b30f1efd8acf5deb8f048cd8d5eb2bf15dc0009bb91642ae5e29b402e" gracePeriod=30 Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.106667 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.106695 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="proxy-httpd" containerID="cri-o://3784da1e072dc24807791940b836a25d3f8e287453787dfb9cf381329acc2702" gracePeriod=30 Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.106781 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="sg-core" containerID="cri-o://9a01189ab636d36bf8f3338d71192b54ed80a864a245b21fb12d3b2364efdc0c" gracePeriod=30 Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.106851 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="ceilometer-notification-agent" containerID="cri-o://b4cff5c3b7a81018b6ce33a08894587d83068e67a18b8eb972c5c5682919f942" gracePeriod=30 Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.150733 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8076821819999997 podStartE2EDuration="5.150713919s" podCreationTimestamp="2025-12-05 14:17:32 +0000 UTC" firstStartedPulling="2025-12-05 14:17:33.35868706 +0000 UTC m=+1261.906285199" lastFinishedPulling="2025-12-05 14:17:35.701718797 +0000 UTC m=+1264.249316936" observedRunningTime="2025-12-05 14:17:37.138791376 +0000 UTC m=+1265.686389515" watchObservedRunningTime="2025-12-05 14:17:37.150713919 +0000 UTC m=+1265.698312058" Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.219661 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7bdnq"] Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.357057 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-eda8-account-create-update-4d2w5"] Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.366789 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jhglf"] Dec 05 14:17:37 crc 
kubenswrapper[4858]: I1205 14:17:37.818047 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9138-account-create-update-sj4qg"] Dec 05 14:17:37 crc kubenswrapper[4858]: I1205 14:17:37.849495 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5ee4-account-create-update-l65v4"] Dec 05 14:17:38 crc kubenswrapper[4858]: W1205 14:17:38.018402 4858 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdcbb580_deba_4812_a820_2170d122b199.slice/crio-a6cbc61d4e99c0e43c29c6c38ec09f5ab789de80f109581aefbaeb9284833700.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdcbb580_deba_4812_a820_2170d122b199.slice/crio-a6cbc61d4e99c0e43c29c6c38ec09f5ab789de80f109581aefbaeb9284833700.scope: no such file or directory Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.128099 4858 generic.go:334] "Generic (PLEG): container finished" podID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerID="3784da1e072dc24807791940b836a25d3f8e287453787dfb9cf381329acc2702" exitCode=0 Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.128125 4858 generic.go:334] "Generic (PLEG): container finished" podID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerID="9a01189ab636d36bf8f3338d71192b54ed80a864a245b21fb12d3b2364efdc0c" exitCode=2 Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.128134 4858 generic.go:334] "Generic (PLEG): container finished" podID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerID="b4cff5c3b7a81018b6ce33a08894587d83068e67a18b8eb972c5c5682919f942" exitCode=0 Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.128141 4858 generic.go:334] "Generic (PLEG): container finished" podID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerID="f27fbe1b30f1efd8acf5deb8f048cd8d5eb2bf15dc0009bb91642ae5e29b402e" exitCode=0 Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.128179 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerDied","Data":"3784da1e072dc24807791940b836a25d3f8e287453787dfb9cf381329acc2702"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.128203 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerDied","Data":"9a01189ab636d36bf8f3338d71192b54ed80a864a245b21fb12d3b2364efdc0c"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.128216 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerDied","Data":"b4cff5c3b7a81018b6ce33a08894587d83068e67a18b8eb972c5c5682919f942"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.128226 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerDied","Data":"f27fbe1b30f1efd8acf5deb8f048cd8d5eb2bf15dc0009bb91642ae5e29b402e"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.130108 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" event={"ID":"b2455633-0480-46f9-b598-4d12d4414a5a","Type":"ContainerStarted","Data":"b16cad44376d45f1fa18871999f00ac35ca1958bcbb97144b1795c4d1046d22c"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.134732 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jhglf" event={"ID":"2066f614-ad2b-4947-8c14-b9df8e78fcac","Type":"ContainerStarted","Data":"c1bedd768eb4843a65f48083eee24d486ba42f8c7892ce8faa7f2830a88aadfa"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.134809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jhglf" event={"ID":"2066f614-ad2b-4947-8c14-b9df8e78fcac","Type":"ContainerStarted","Data":"5cdccc01a09f16d4de01e04d5be0374dd8724d613fa1ee96ee1457b90148ade9"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.140284 4858 generic.go:334] "Generic (PLEG): container finished" podID="b958f7a4-1b99-4ce8-badb-52855609ec9d" containerID="928f601cb8d9654ee458318ba6765c0a9723b7b822ef2f70f97324bf36d8928f" exitCode=0 Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.140341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7c5f557b4c-fdhxg" event={"ID":"b958f7a4-1b99-4ce8-badb-52855609ec9d","Type":"ContainerDied","Data":"928f601cb8d9654ee458318ba6765c0a9723b7b822ef2f70f97324bf36d8928f"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.140364 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7c5f557b4c-fdhxg" event={"ID":"b958f7a4-1b99-4ce8-badb-52855609ec9d","Type":"ContainerDied","Data":"58e362b1744068c52c916a61dcecf2031f276f25be9981c0908c42cbc8bff860"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.140376 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58e362b1744068c52c916a61dcecf2031f276f25be9981c0908c42cbc8bff860" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.141677 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eda8-account-create-update-4d2w5" event={"ID":"9dfbd339-df73-4eff-adbc-6394489044cd","Type":"ContainerStarted","Data":"ae7626dabc430e9e77b6b7c9d6d693877ef922da9cba2621b611b8d00334bc3d"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.141703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eda8-account-create-update-4d2w5" event={"ID":"9dfbd339-df73-4eff-adbc-6394489044cd","Type":"ContainerStarted","Data":"9a5f9d9c763c4c736eab3236f82604adbfbf721917f050cef4c88973a6ae1adf"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.143933 4858 generic.go:334] "Generic (PLEG): container finished" podID="fdcbb580-deba-4812-a820-2170d122b199" containerID="a6cbc61d4e99c0e43c29c6c38ec09f5ab789de80f109581aefbaeb9284833700" exitCode=0 Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.143983 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6546b" event={"ID":"fdcbb580-deba-4812-a820-2170d122b199","Type":"ContainerDied","Data":"a6cbc61d4e99c0e43c29c6c38ec09f5ab789de80f109581aefbaeb9284833700"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.145261 4858 generic.go:334] "Generic (PLEG): container finished" podID="960299c2-8250-45a8-a10c-c4ee4b105910" containerID="09ff3f037151dae528501d6940ac6e8f4f26f89c9a66b470e4b51de25e3dce8e" exitCode=0 Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.145300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7bdnq" event={"ID":"960299c2-8250-45a8-a10c-c4ee4b105910","Type":"ContainerDied","Data":"09ff3f037151dae528501d6940ac6e8f4f26f89c9a66b470e4b51de25e3dce8e"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.145315 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-7bdnq" event={"ID":"960299c2-8250-45a8-a10c-c4ee4b105910","Type":"ContainerStarted","Data":"a7c4cba747a05dfc0f7a2be4fa5918e93d8633cdeaa0272edc990a3eafe1155d"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.148602 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9138-account-create-update-sj4qg" event={"ID":"dd999106-5891-4eea-8021-c3c7d5899b3f","Type":"ContainerStarted","Data":"60e11ba1aea54569ac008ec85e9fdbbd21cb5a7fc9851f3d36949889ddda502f"} Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.162777 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-jhglf" podStartSLOduration=3.162758974 podStartE2EDuration="3.162758974s" podCreationTimestamp="2025-12-05 14:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:17:38.159707401 +0000 UTC m=+1266.707305540" watchObservedRunningTime="2025-12-05 14:17:38.162758974 +0000 UTC m=+1266.710357113" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.175480 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7c5f557b4c-fdhxg" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.252710 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-combined-ca-bundle\") pod \"b958f7a4-1b99-4ce8-badb-52855609ec9d\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.256608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr8hg\" (UniqueName: \"kubernetes.io/projected/b958f7a4-1b99-4ce8-badb-52855609ec9d-kube-api-access-vr8hg\") pod \"b958f7a4-1b99-4ce8-badb-52855609ec9d\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.256953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data-custom\") pod \"b958f7a4-1b99-4ce8-badb-52855609ec9d\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.257004 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data\") pod \"b958f7a4-1b99-4ce8-badb-52855609ec9d\" (UID: \"b958f7a4-1b99-4ce8-badb-52855609ec9d\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.261199 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-eda8-account-create-update-4d2w5" podStartSLOduration=3.261176833 podStartE2EDuration="3.261176833s" podCreationTimestamp="2025-12-05 14:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:17:38.233543036 +0000 UTC m=+1266.781141175" watchObservedRunningTime="2025-12-05 14:17:38.261176833 +0000 UTC m=+1266.808774972" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.276636 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b958f7a4-1b99-4ce8-badb-52855609ec9d-kube-api-access-vr8hg" (OuterVolumeSpecName: 
"kube-api-access-vr8hg") pod "b958f7a4-1b99-4ce8-badb-52855609ec9d" (UID: "b958f7a4-1b99-4ce8-badb-52855609ec9d"). InnerVolumeSpecName "kube-api-access-vr8hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.285955 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b958f7a4-1b99-4ce8-badb-52855609ec9d" (UID: "b958f7a4-1b99-4ce8-badb-52855609ec9d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.335011 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b958f7a4-1b99-4ce8-badb-52855609ec9d" (UID: "b958f7a4-1b99-4ce8-badb-52855609ec9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.374322 4858 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.374361 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.374374 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr8hg\" (UniqueName: \"kubernetes.io/projected/b958f7a4-1b99-4ce8-badb-52855609ec9d-kube-api-access-vr8hg\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.454561 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data" (OuterVolumeSpecName: "config-data") pod "b958f7a4-1b99-4ce8-badb-52855609ec9d" (UID: "b958f7a4-1b99-4ce8-badb-52855609ec9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.475671 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.475758 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b958f7a4-1b99-4ce8-badb-52855609ec9d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.577051 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-sg-core-conf-yaml\") pod \"8a655a7d-df8c-4d54-8233-dab33dfbc233\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.577086 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-scripts\") pod \"8a655a7d-df8c-4d54-8233-dab33dfbc233\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.577115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-run-httpd\") pod \"8a655a7d-df8c-4d54-8233-dab33dfbc233\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.577217 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-combined-ca-bundle\") pod \"8a655a7d-df8c-4d54-8233-dab33dfbc233\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.577299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwwcc\" (UniqueName: \"kubernetes.io/projected/8a655a7d-df8c-4d54-8233-dab33dfbc233-kube-api-access-xwwcc\") pod \"8a655a7d-df8c-4d54-8233-dab33dfbc233\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.577334 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-config-data\") pod \"8a655a7d-df8c-4d54-8233-dab33dfbc233\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.577365 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-log-httpd\") pod \"8a655a7d-df8c-4d54-8233-dab33dfbc233\" (UID: \"8a655a7d-df8c-4d54-8233-dab33dfbc233\") " Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.578222 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8a655a7d-df8c-4d54-8233-dab33dfbc233" (UID: "8a655a7d-df8c-4d54-8233-dab33dfbc233"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.580311 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8a655a7d-df8c-4d54-8233-dab33dfbc233" (UID: "8a655a7d-df8c-4d54-8233-dab33dfbc233"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.586624 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-scripts" (OuterVolumeSpecName: "scripts") pod "8a655a7d-df8c-4d54-8233-dab33dfbc233" (UID: "8a655a7d-df8c-4d54-8233-dab33dfbc233"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.596792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a655a7d-df8c-4d54-8233-dab33dfbc233-kube-api-access-xwwcc" (OuterVolumeSpecName: "kube-api-access-xwwcc") pod "8a655a7d-df8c-4d54-8233-dab33dfbc233" (UID: "8a655a7d-df8c-4d54-8233-dab33dfbc233"). InnerVolumeSpecName "kube-api-access-xwwcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.644193 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8a655a7d-df8c-4d54-8233-dab33dfbc233" (UID: "8a655a7d-df8c-4d54-8233-dab33dfbc233"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.680101 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwwcc\" (UniqueName: \"kubernetes.io/projected/8a655a7d-df8c-4d54-8233-dab33dfbc233-kube-api-access-xwwcc\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.680148 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.680158 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.680166 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.680175 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a655a7d-df8c-4d54-8233-dab33dfbc233-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.711013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a655a7d-df8c-4d54-8233-dab33dfbc233" (UID: "8a655a7d-df8c-4d54-8233-dab33dfbc233"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.779709 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-config-data" (OuterVolumeSpecName: "config-data") pod "8a655a7d-df8c-4d54-8233-dab33dfbc233" (UID: "8a655a7d-df8c-4d54-8233-dab33dfbc233"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.781425 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:38 crc kubenswrapper[4858]: I1205 14:17:38.781459 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a655a7d-df8c-4d54-8233-dab33dfbc233-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.159103 4858 generic.go:334] "Generic (PLEG): container finished" podID="b2455633-0480-46f9-b598-4d12d4414a5a" containerID="f64ac43eb49d540b2c784c0f48cdae07eb2d65d75a2c20375089766bb67f9c4c" exitCode=0 Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.159191 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" event={"ID":"b2455633-0480-46f9-b598-4d12d4414a5a","Type":"ContainerDied","Data":"f64ac43eb49d540b2c784c0f48cdae07eb2d65d75a2c20375089766bb67f9c4c"} Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.161276 4858 generic.go:334] "Generic (PLEG): container finished" podID="2066f614-ad2b-4947-8c14-b9df8e78fcac" containerID="c1bedd768eb4843a65f48083eee24d486ba42f8c7892ce8faa7f2830a88aadfa" exitCode=0 Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.161327 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jhglf" event={"ID":"2066f614-ad2b-4947-8c14-b9df8e78fcac","Type":"ContainerDied","Data":"c1bedd768eb4843a65f48083eee24d486ba42f8c7892ce8faa7f2830a88aadfa"} Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.163209 4858 generic.go:334] "Generic (PLEG): container finished" podID="9dfbd339-df73-4eff-adbc-6394489044cd" containerID="ae7626dabc430e9e77b6b7c9d6d693877ef922da9cba2621b611b8d00334bc3d" exitCode=0 Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.163256 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eda8-account-create-update-4d2w5" event={"ID":"9dfbd339-df73-4eff-adbc-6394489044cd","Type":"ContainerDied","Data":"ae7626dabc430e9e77b6b7c9d6d693877ef922da9cba2621b611b8d00334bc3d"} Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.164854 4858 generic.go:334] "Generic (PLEG): container finished" podID="dd999106-5891-4eea-8021-c3c7d5899b3f" containerID="2238984a99460a7f402cc398cbe550c56546492b223d89eecd5105000ea2ba30" exitCode=0 Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.164882 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9138-account-create-update-sj4qg" event={"ID":"dd999106-5891-4eea-8021-c3c7d5899b3f","Type":"ContainerDied","Data":"2238984a99460a7f402cc398cbe550c56546492b223d89eecd5105000ea2ba30"} Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.167708 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.167724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a655a7d-df8c-4d54-8233-dab33dfbc233","Type":"ContainerDied","Data":"80d9b3c82286626d01a7a551423ac2dda4218849334244cf25ef2c7b579b9233"} Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.167754 4858 scope.go:117] "RemoveContainer" containerID="3784da1e072dc24807791940b836a25d3f8e287453787dfb9cf381329acc2702" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.167713 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7c5f557b4c-fdhxg" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.215238 4858 scope.go:117] "RemoveContainer" containerID="9a01189ab636d36bf8f3338d71192b54ed80a864a245b21fb12d3b2364efdc0c" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.223571 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7c5f557b4c-fdhxg"] Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.248320 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7c5f557b4c-fdhxg"] Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.252088 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:60628->10.217.0.149:8443: read: connection reset by peer" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.287747 4858 scope.go:117] "RemoveContainer" containerID="b4cff5c3b7a81018b6ce33a08894587d83068e67a18b8eb972c5c5682919f942" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.371493 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.378998 4858 scope.go:117] "RemoveContainer" containerID="f27fbe1b30f1efd8acf5deb8f048cd8d5eb2bf15dc0009bb91642ae5e29b402e" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.387176 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.395877 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:39 crc kubenswrapper[4858]: E1205 14:17:39.396280 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b958f7a4-1b99-4ce8-badb-52855609ec9d" containerName="heat-engine" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396292 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b958f7a4-1b99-4ce8-badb-52855609ec9d" containerName="heat-engine" Dec 05 14:17:39 crc kubenswrapper[4858]: E1205 14:17:39.396308 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="sg-core" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396314 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="sg-core" Dec 05 14:17:39 crc kubenswrapper[4858]: E1205 14:17:39.396329 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="ceilometer-notification-agent" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396335 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="ceilometer-notification-agent" Dec 05 14:17:39 crc kubenswrapper[4858]: E1205 14:17:39.396356 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="proxy-httpd" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396362 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="proxy-httpd" Dec 05 14:17:39 crc kubenswrapper[4858]: E1205 14:17:39.396372 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="ceilometer-central-agent" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396379 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="ceilometer-central-agent" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396593 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="proxy-httpd" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396612 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="ceilometer-central-agent" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396625 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="sg-core" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396633 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b958f7a4-1b99-4ce8-badb-52855609ec9d" containerName="heat-engine" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.396643 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" containerName="ceilometer-notification-agent" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.398392 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.402764 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.403099 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.412694 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.519346 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-log-httpd\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.519688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.519769 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.519798 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-scripts\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.519844 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-config-data\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.519865 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8bb\" (UniqueName: \"kubernetes.io/projected/75f57de3-ea73-4932-8de3-acc549e6df55-kube-api-access-xd8bb\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.519913 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-run-httpd\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.621221 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-scripts\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.621276 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-config-data\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.621300 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd8bb\" (UniqueName: \"kubernetes.io/projected/75f57de3-ea73-4932-8de3-acc549e6df55-kube-api-access-xd8bb\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.621352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-run-httpd\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.621389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-log-httpd\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.621421 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.621479 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.625529 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-log-httpd\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.629207 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-run-httpd\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.631242 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-scripts\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.639276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.639472 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.640941 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-config-data\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.649432 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd8bb\" (UniqueName: \"kubernetes.io/projected/75f57de3-ea73-4932-8de3-acc549e6df55-kube-api-access-xd8bb\") pod \"ceilometer-0\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.651122 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.723892 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.827297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t26rc\" (UniqueName: \"kubernetes.io/projected/960299c2-8250-45a8-a10c-c4ee4b105910-kube-api-access-t26rc\") pod \"960299c2-8250-45a8-a10c-c4ee4b105910\" (UID: \"960299c2-8250-45a8-a10c-c4ee4b105910\") " Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.827350 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960299c2-8250-45a8-a10c-c4ee4b105910-operator-scripts\") pod \"960299c2-8250-45a8-a10c-c4ee4b105910\" (UID: \"960299c2-8250-45a8-a10c-c4ee4b105910\") " Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.828708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960299c2-8250-45a8-a10c-c4ee4b105910-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "960299c2-8250-45a8-a10c-c4ee4b105910" (UID: "960299c2-8250-45a8-a10c-c4ee4b105910"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.832105 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960299c2-8250-45a8-a10c-c4ee4b105910-kube-api-access-t26rc" (OuterVolumeSpecName: "kube-api-access-t26rc") pod "960299c2-8250-45a8-a10c-c4ee4b105910" (UID: "960299c2-8250-45a8-a10c-c4ee4b105910"). InnerVolumeSpecName "kube-api-access-t26rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.898510 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.920213 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a655a7d-df8c-4d54-8233-dab33dfbc233" path="/var/lib/kubelet/pods/8a655a7d-df8c-4d54-8233-dab33dfbc233/volumes" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.921150 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b958f7a4-1b99-4ce8-badb-52855609ec9d" path="/var/lib/kubelet/pods/b958f7a4-1b99-4ce8-badb-52855609ec9d/volumes" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.929323 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t26rc\" (UniqueName: \"kubernetes.io/projected/960299c2-8250-45a8-a10c-c4ee4b105910-kube-api-access-t26rc\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:39 crc kubenswrapper[4858]: I1205 14:17:39.929358 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/960299c2-8250-45a8-a10c-c4ee4b105910-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.033360 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcbb580-deba-4812-a820-2170d122b199-operator-scripts\") pod \"fdcbb580-deba-4812-a820-2170d122b199\" (UID: \"fdcbb580-deba-4812-a820-2170d122b199\") " Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.033600 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctftf\" (UniqueName: \"kubernetes.io/projected/fdcbb580-deba-4812-a820-2170d122b199-kube-api-access-ctftf\") pod \"fdcbb580-deba-4812-a820-2170d122b199\" (UID: \"fdcbb580-deba-4812-a820-2170d122b199\") " Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.033996 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdcbb580-deba-4812-a820-2170d122b199-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdcbb580-deba-4812-a820-2170d122b199" (UID: "fdcbb580-deba-4812-a820-2170d122b199"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.035586 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdcbb580-deba-4812-a820-2170d122b199-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.049927 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcbb580-deba-4812-a820-2170d122b199-kube-api-access-ctftf" (OuterVolumeSpecName: "kube-api-access-ctftf") pod "fdcbb580-deba-4812-a820-2170d122b199" (UID: "fdcbb580-deba-4812-a820-2170d122b199"). InnerVolumeSpecName "kube-api-access-ctftf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.137876 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctftf\" (UniqueName: \"kubernetes.io/projected/fdcbb580-deba-4812-a820-2170d122b199-kube-api-access-ctftf\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.184201 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-6546b" event={"ID":"fdcbb580-deba-4812-a820-2170d122b199","Type":"ContainerDied","Data":"968a8bfd143bca03b98fcee8622e2dc27c15753ce8b5d4291ff60d17cf15435a"} Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.185254 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968a8bfd143bca03b98fcee8622e2dc27c15753ce8b5d4291ff60d17cf15435a" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.185413 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-6546b" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.192367 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerID="2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568" exitCode=0 Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.192568 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fd8d549b-n87dk" event={"ID":"f4e91f9c-4d1e-4765-b609-32b5531066bf","Type":"ContainerDied","Data":"2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568"} Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.192619 4858 scope.go:117] "RemoveContainer" containerID="61f7dd3bef7baaad01301f499bc946fc6b7f67a00416e4a5dc1f0bf9d190b0df" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.198549 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7bdnq" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.199157 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7bdnq" event={"ID":"960299c2-8250-45a8-a10c-c4ee4b105910","Type":"ContainerDied","Data":"a7c4cba747a05dfc0f7a2be4fa5918e93d8633cdeaa0272edc990a3eafe1155d"} Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.199199 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7c4cba747a05dfc0f7a2be4fa5918e93d8633cdeaa0272edc990a3eafe1155d" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.390250 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.651335 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.756031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2066f614-ad2b-4947-8c14-b9df8e78fcac-operator-scripts\") pod \"2066f614-ad2b-4947-8c14-b9df8e78fcac\" (UID: \"2066f614-ad2b-4947-8c14-b9df8e78fcac\") " Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.756475 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zbx8\" (UniqueName: \"kubernetes.io/projected/2066f614-ad2b-4947-8c14-b9df8e78fcac-kube-api-access-9zbx8\") pod \"2066f614-ad2b-4947-8c14-b9df8e78fcac\" (UID: \"2066f614-ad2b-4947-8c14-b9df8e78fcac\") " Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.757697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2066f614-ad2b-4947-8c14-b9df8e78fcac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2066f614-ad2b-4947-8c14-b9df8e78fcac" (UID: "2066f614-ad2b-4947-8c14-b9df8e78fcac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.791101 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2066f614-ad2b-4947-8c14-b9df8e78fcac-kube-api-access-9zbx8" (OuterVolumeSpecName: "kube-api-access-9zbx8") pod "2066f614-ad2b-4947-8c14-b9df8e78fcac" (UID: "2066f614-ad2b-4947-8c14-b9df8e78fcac"). InnerVolumeSpecName "kube-api-access-9zbx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.860083 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2066f614-ad2b-4947-8c14-b9df8e78fcac-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:40 crc kubenswrapper[4858]: I1205 14:17:40.860120 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zbx8\" (UniqueName: \"kubernetes.io/projected/2066f614-ad2b-4947-8c14-b9df8e78fcac-kube-api-access-9zbx8\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.100510 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.109582 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.112533 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.172904 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfbd339-df73-4eff-adbc-6394489044cd-operator-scripts\") pod \"9dfbd339-df73-4eff-adbc-6394489044cd\" (UID: \"9dfbd339-df73-4eff-adbc-6394489044cd\") " Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.173628 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqb72\" (UniqueName: \"kubernetes.io/projected/9dfbd339-df73-4eff-adbc-6394489044cd-kube-api-access-xqb72\") pod \"9dfbd339-df73-4eff-adbc-6394489044cd\" (UID: \"9dfbd339-df73-4eff-adbc-6394489044cd\") " Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.174652 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dfbd339-df73-4eff-adbc-6394489044cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9dfbd339-df73-4eff-adbc-6394489044cd" (UID: "9dfbd339-df73-4eff-adbc-6394489044cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.178554 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfbd339-df73-4eff-adbc-6394489044cd-kube-api-access-xqb72" (OuterVolumeSpecName: "kube-api-access-xqb72") pod "9dfbd339-df73-4eff-adbc-6394489044cd" (UID: "9dfbd339-df73-4eff-adbc-6394489044cd"). InnerVolumeSpecName "kube-api-access-xqb72". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.218578 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" event={"ID":"b2455633-0480-46f9-b598-4d12d4414a5a","Type":"ContainerDied","Data":"b16cad44376d45f1fa18871999f00ac35ca1958bcbb97144b1795c4d1046d22c"} Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.218615 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b16cad44376d45f1fa18871999f00ac35ca1958bcbb97144b1795c4d1046d22c" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.218672 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ee4-account-create-update-l65v4" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.222740 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jhglf" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.227696 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jhglf" event={"ID":"2066f614-ad2b-4947-8c14-b9df8e78fcac","Type":"ContainerDied","Data":"5cdccc01a09f16d4de01e04d5be0374dd8724d613fa1ee96ee1457b90148ade9"} Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.227755 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cdccc01a09f16d4de01e04d5be0374dd8724d613fa1ee96ee1457b90148ade9" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.231442 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-eda8-account-create-update-4d2w5" event={"ID":"9dfbd339-df73-4eff-adbc-6394489044cd","Type":"ContainerDied","Data":"9a5f9d9c763c4c736eab3236f82604adbfbf721917f050cef4c88973a6ae1adf"} Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.231843 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5f9d9c763c4c736eab3236f82604adbfbf721917f050cef4c88973a6ae1adf" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.231558 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-eda8-account-create-update-4d2w5" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.240294 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerStarted","Data":"85a90da60685067812648d7bd3af8dfceda1e5dea3508bbb7da20b0873fdf0d9"} Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.240341 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerStarted","Data":"e3857fd3cbb04fd3fce7fd5367cb6b59e25db3a263d60d95e07436c4cf79371c"} Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.242404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9138-account-create-update-sj4qg" event={"ID":"dd999106-5891-4eea-8021-c3c7d5899b3f","Type":"ContainerDied","Data":"60e11ba1aea54569ac008ec85e9fdbbd21cb5a7fc9851f3d36949889ddda502f"} Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.242561 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e11ba1aea54569ac008ec85e9fdbbd21cb5a7fc9851f3d36949889ddda502f" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.242681 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9138-account-create-update-sj4qg" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.279202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2455633-0480-46f9-b598-4d12d4414a5a-operator-scripts\") pod \"b2455633-0480-46f9-b598-4d12d4414a5a\" (UID: \"b2455633-0480-46f9-b598-4d12d4414a5a\") " Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.279299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgd98\" (UniqueName: \"kubernetes.io/projected/b2455633-0480-46f9-b598-4d12d4414a5a-kube-api-access-xgd98\") pod \"b2455633-0480-46f9-b598-4d12d4414a5a\" (UID: \"b2455633-0480-46f9-b598-4d12d4414a5a\") " Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.279417 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkrlr\" (UniqueName: \"kubernetes.io/projected/dd999106-5891-4eea-8021-c3c7d5899b3f-kube-api-access-gkrlr\") pod \"dd999106-5891-4eea-8021-c3c7d5899b3f\" (UID: \"dd999106-5891-4eea-8021-c3c7d5899b3f\") " Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.279703 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd999106-5891-4eea-8021-c3c7d5899b3f-operator-scripts\") pod \"dd999106-5891-4eea-8021-c3c7d5899b3f\" (UID: \"dd999106-5891-4eea-8021-c3c7d5899b3f\") " Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.280172 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9dfbd339-df73-4eff-adbc-6394489044cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.280185 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqb72\" (UniqueName: \"kubernetes.io/projected/9dfbd339-df73-4eff-adbc-6394489044cd-kube-api-access-xqb72\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.280588 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd999106-5891-4eea-8021-c3c7d5899b3f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd999106-5891-4eea-8021-c3c7d5899b3f" (UID: "dd999106-5891-4eea-8021-c3c7d5899b3f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.282733 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2455633-0480-46f9-b598-4d12d4414a5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2455633-0480-46f9-b598-4d12d4414a5a" (UID: "b2455633-0480-46f9-b598-4d12d4414a5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.285541 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2455633-0480-46f9-b598-4d12d4414a5a-kube-api-access-xgd98" (OuterVolumeSpecName: "kube-api-access-xgd98") pod "b2455633-0480-46f9-b598-4d12d4414a5a" (UID: "b2455633-0480-46f9-b598-4d12d4414a5a"). InnerVolumeSpecName "kube-api-access-xgd98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.286926 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd999106-5891-4eea-8021-c3c7d5899b3f-kube-api-access-gkrlr" (OuterVolumeSpecName: "kube-api-access-gkrlr") pod "dd999106-5891-4eea-8021-c3c7d5899b3f" (UID: "dd999106-5891-4eea-8021-c3c7d5899b3f"). InnerVolumeSpecName "kube-api-access-gkrlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.382170 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2455633-0480-46f9-b598-4d12d4414a5a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.382208 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgd98\" (UniqueName: \"kubernetes.io/projected/b2455633-0480-46f9-b598-4d12d4414a5a-kube-api-access-xgd98\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.382222 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkrlr\" (UniqueName: \"kubernetes.io/projected/dd999106-5891-4eea-8021-c3c7d5899b3f-kube-api-access-gkrlr\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:41 crc kubenswrapper[4858]: I1205 14:17:41.382233 4858 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd999106-5891-4eea-8021-c3c7d5899b3f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:17:42 crc kubenswrapper[4858]: I1205 14:17:42.251173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerStarted","Data":"0d7098df3d34e24ec33675265c21e9691eb901db90aa61c45d416b48f2de15a3"} Dec 05 14:17:43 crc kubenswrapper[4858]: I1205 14:17:43.261704 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerStarted","Data":"5768407d220bd7755b56e82c0bb668e44ff572d98737280897a49b065d108733"} Dec 05 14:17:44 crc kubenswrapper[4858]: I1205 14:17:44.273190 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerStarted","Data":"bd261ac717e368f7591c7f99af6e3f8047b6ab9d63f5ef6cefb05c7e722a5402"} Dec 05 14:17:44 crc kubenswrapper[4858]: I1205 14:17:44.273666 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 14:17:44 crc kubenswrapper[4858]: I1205 14:17:44.301654 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.510392705 podStartE2EDuration="5.301638423s" podCreationTimestamp="2025-12-05 14:17:39 +0000 UTC" firstStartedPulling="2025-12-05 14:17:40.466005836 +0000 UTC m=+1269.013603975" lastFinishedPulling="2025-12-05 14:17:43.257251534 +0000 UTC m=+1271.804849693" observedRunningTime="2025-12-05 14:17:44.298861198 +0000 UTC m=+1272.846459337" watchObservedRunningTime="2025-12-05 14:17:44.301638423 +0000 UTC m=+1272.849236562" Dec 05 14:17:44 crc kubenswrapper[4858]: I1205 14:17:44.759932 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:17:44 crc kubenswrapper[4858]: I1205 14:17:44.760003 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:17:44 crc kubenswrapper[4858]: I1205 14:17:44.760054 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:17:44 crc kubenswrapper[4858]: I1205 14:17:44.760889 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"472064fae0079b1bc994525982e709b1ab2bd1dccaa9fb9d8e2cbb9dfa8c4695"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:17:44 crc kubenswrapper[4858]: I1205 14:17:44.760955 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://472064fae0079b1bc994525982e709b1ab2bd1dccaa9fb9d8e2cbb9dfa8c4695" gracePeriod=600 Dec 05 14:17:45 crc kubenswrapper[4858]: I1205 14:17:45.285448 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="472064fae0079b1bc994525982e709b1ab2bd1dccaa9fb9d8e2cbb9dfa8c4695" exitCode=0 Dec 05 14:17:45 crc kubenswrapper[4858]: I1205 14:17:45.285507 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"472064fae0079b1bc994525982e709b1ab2bd1dccaa9fb9d8e2cbb9dfa8c4695"} Dec 05 14:17:45 crc kubenswrapper[4858]: I1205 14:17:45.286095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"b8424605d2464ee3ef0a69ac56cbc16766cf5b070918dfe5d9640a4a043f1721"} Dec 05 14:17:45 crc kubenswrapper[4858]: I1205 14:17:45.286115 4858 scope.go:117] "RemoveContainer" containerID="e5e80f882b080532d912d4ccb8829cb93a92e3352e086e2ac39b582773b7cafa" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.374234 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmsdb"] Dec 05 14:17:46 crc kubenswrapper[4858]: E1205 14:17:46.376030 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcbb580-deba-4812-a820-2170d122b199" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.376125 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcbb580-deba-4812-a820-2170d122b199" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: E1205 14:17:46.376193 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2066f614-ad2b-4947-8c14-b9df8e78fcac" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.376251 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2066f614-ad2b-4947-8c14-b9df8e78fcac" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: E1205 14:17:46.376325 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2455633-0480-46f9-b598-4d12d4414a5a" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.376382 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2455633-0480-46f9-b598-4d12d4414a5a" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: E1205 14:17:46.376478 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960299c2-8250-45a8-a10c-c4ee4b105910" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.376552 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="960299c2-8250-45a8-a10c-c4ee4b105910" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: E1205 14:17:46.376616 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfbd339-df73-4eff-adbc-6394489044cd" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.376671 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfbd339-df73-4eff-adbc-6394489044cd" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: E1205 14:17:46.376733 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd999106-5891-4eea-8021-c3c7d5899b3f" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.376790 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd999106-5891-4eea-8021-c3c7d5899b3f" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.377037 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfbd339-df73-4eff-adbc-6394489044cd" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.377107 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcbb580-deba-4812-a820-2170d122b199" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.377171 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2455633-0480-46f9-b598-4d12d4414a5a" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.377235 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="960299c2-8250-45a8-a10c-c4ee4b105910" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.377292 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd999106-5891-4eea-8021-c3c7d5899b3f" containerName="mariadb-account-create-update" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.377357 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2066f614-ad2b-4947-8c14-b9df8e78fcac" containerName="mariadb-database-create" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.378082 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.385338 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.385497 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.388017 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kb6n6" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.396514 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmsdb"] Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.476971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-config-data\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.477013 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sw9x\" (UniqueName: \"kubernetes.io/projected/eab50221-12a4-4a60-910e-d020c85a5e7a-kube-api-access-4sw9x\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.477189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.477248 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-scripts\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.579316 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.579433 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-scripts\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.579470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-config-data\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: 
\"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.579487 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sw9x\" (UniqueName: \"kubernetes.io/projected/eab50221-12a4-4a60-910e-d020c85a5e7a-kube-api-access-4sw9x\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.586053 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.592753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-scripts\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.600772 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-config-data\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.613689 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sw9x\" (UniqueName: \"kubernetes.io/projected/eab50221-12a4-4a60-910e-d020c85a5e7a-kube-api-access-4sw9x\") pod \"nova-cell0-conductor-db-sync-bmsdb\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:46 crc kubenswrapper[4858]: I1205 14:17:46.697815 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:17:47 crc kubenswrapper[4858]: I1205 14:17:47.218318 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmsdb"] Dec 05 14:17:47 crc kubenswrapper[4858]: I1205 14:17:47.307396 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" event={"ID":"eab50221-12a4-4a60-910e-d020c85a5e7a","Type":"ContainerStarted","Data":"d115d88c6e35ee4cc70a99a4b938aa246e96f243afd4561ea240ec87b1d772ce"} Dec 05 14:17:48 crc kubenswrapper[4858]: I1205 14:17:48.654213 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 05 14:17:56 crc kubenswrapper[4858]: I1205 14:17:56.388112 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" event={"ID":"eab50221-12a4-4a60-910e-d020c85a5e7a","Type":"ContainerStarted","Data":"9b82f2f0117c9462f2939ab45183c7a185d7bec6b48b4c77704b23e6abd9c29b"} Dec 05 14:17:56 crc kubenswrapper[4858]: I1205 14:17:56.414013 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" podStartSLOduration=1.568920654 podStartE2EDuration="10.413991474s" podCreationTimestamp="2025-12-05 14:17:46 +0000 UTC" firstStartedPulling="2025-12-05 14:17:47.221112526 +0000 UTC m=+1275.768710665" lastFinishedPulling="2025-12-05 14:17:56.066183346 +0000 UTC m=+1284.613781485" observedRunningTime="2025-12-05 14:17:56.409009629 +0000 UTC m=+1284.956607768" watchObservedRunningTime="2025-12-05 14:17:56.413991474 +0000 UTC m=+1284.961589613" Dec 05 14:17:58 crc kubenswrapper[4858]: I1205 14:17:58.544780 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:17:58 crc kubenswrapper[4858]: I1205 14:17:58.545227 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="ceilometer-central-agent" containerID="cri-o://85a90da60685067812648d7bd3af8dfceda1e5dea3508bbb7da20b0873fdf0d9" gracePeriod=30 Dec 05 14:17:58 crc kubenswrapper[4858]: I1205 14:17:58.545641 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="proxy-httpd" containerID="cri-o://bd261ac717e368f7591c7f99af6e3f8047b6ab9d63f5ef6cefb05c7e722a5402" gracePeriod=30 Dec 05 14:17:58 crc kubenswrapper[4858]: I1205 14:17:58.545687 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="sg-core" containerID="cri-o://5768407d220bd7755b56e82c0bb668e44ff572d98737280897a49b065d108733" gracePeriod=30 Dec 05 14:17:58 crc kubenswrapper[4858]: I1205 14:17:58.545716 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="ceilometer-notification-agent" containerID="cri-o://0d7098df3d34e24ec33675265c21e9691eb901db90aa61c45d416b48f2de15a3" gracePeriod=30 Dec 05 14:17:58 crc kubenswrapper[4858]: I1205 14:17:58.571197 4858 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 05 14:17:58 crc kubenswrapper[4858]: I1205 14:17:58.658602 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66fd8d549b-n87dk" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 05 14:17:58 crc kubenswrapper[4858]: I1205 14:17:58.658734 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.418521 4858 generic.go:334] "Generic (PLEG): container finished" podID="75f57de3-ea73-4932-8de3-acc549e6df55" containerID="bd261ac717e368f7591c7f99af6e3f8047b6ab9d63f5ef6cefb05c7e722a5402" exitCode=0 Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.418558 4858 generic.go:334] "Generic (PLEG): container finished" podID="75f57de3-ea73-4932-8de3-acc549e6df55" containerID="5768407d220bd7755b56e82c0bb668e44ff572d98737280897a49b065d108733" exitCode=2 Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.418567 4858 generic.go:334] "Generic (PLEG): container finished" podID="75f57de3-ea73-4932-8de3-acc549e6df55" containerID="0d7098df3d34e24ec33675265c21e9691eb901db90aa61c45d416b48f2de15a3" exitCode=0 Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.418577 4858 generic.go:334] "Generic (PLEG): container finished" podID="75f57de3-ea73-4932-8de3-acc549e6df55" containerID="85a90da60685067812648d7bd3af8dfceda1e5dea3508bbb7da20b0873fdf0d9" exitCode=0 Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.418599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerDied","Data":"bd261ac717e368f7591c7f99af6e3f8047b6ab9d63f5ef6cefb05c7e722a5402"} Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.418628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerDied","Data":"5768407d220bd7755b56e82c0bb668e44ff572d98737280897a49b065d108733"} Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.418641 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerDied","Data":"0d7098df3d34e24ec33675265c21e9691eb901db90aa61c45d416b48f2de15a3"} Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.418653 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerDied","Data":"85a90da60685067812648d7bd3af8dfceda1e5dea3508bbb7da20b0873fdf0d9"} Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.725701 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.922949 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-config-data\") pod \"75f57de3-ea73-4932-8de3-acc549e6df55\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.923019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-log-httpd\") pod \"75f57de3-ea73-4932-8de3-acc549e6df55\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.923059 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd8bb\" (UniqueName: \"kubernetes.io/projected/75f57de3-ea73-4932-8de3-acc549e6df55-kube-api-access-xd8bb\") pod \"75f57de3-ea73-4932-8de3-acc549e6df55\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.923088 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-combined-ca-bundle\") pod \"75f57de3-ea73-4932-8de3-acc549e6df55\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.923135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-sg-core-conf-yaml\") pod \"75f57de3-ea73-4932-8de3-acc549e6df55\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.923157 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-run-httpd\") pod \"75f57de3-ea73-4932-8de3-acc549e6df55\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.923173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-scripts\") pod \"75f57de3-ea73-4932-8de3-acc549e6df55\" (UID: \"75f57de3-ea73-4932-8de3-acc549e6df55\") " Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.923935 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75f57de3-ea73-4932-8de3-acc549e6df55" (UID: "75f57de3-ea73-4932-8de3-acc549e6df55"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.924162 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75f57de3-ea73-4932-8de3-acc549e6df55" (UID: "75f57de3-ea73-4932-8de3-acc549e6df55"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.929665 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f57de3-ea73-4932-8de3-acc549e6df55-kube-api-access-xd8bb" (OuterVolumeSpecName: "kube-api-access-xd8bb") pod "75f57de3-ea73-4932-8de3-acc549e6df55" (UID: "75f57de3-ea73-4932-8de3-acc549e6df55"). InnerVolumeSpecName "kube-api-access-xd8bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.930570 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-scripts" (OuterVolumeSpecName: "scripts") pod "75f57de3-ea73-4932-8de3-acc549e6df55" (UID: "75f57de3-ea73-4932-8de3-acc549e6df55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:17:59 crc kubenswrapper[4858]: I1205 14:17:59.954772 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75f57de3-ea73-4932-8de3-acc549e6df55" (UID: "75f57de3-ea73-4932-8de3-acc549e6df55"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.025903 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.025942 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd8bb\" (UniqueName: \"kubernetes.io/projected/75f57de3-ea73-4932-8de3-acc549e6df55-kube-api-access-xd8bb\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.025954 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.025968 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f57de3-ea73-4932-8de3-acc549e6df55-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.025980 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.034090 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-config-data" (OuterVolumeSpecName: "config-data") pod "75f57de3-ea73-4932-8de3-acc549e6df55" (UID: "75f57de3-ea73-4932-8de3-acc549e6df55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.049042 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75f57de3-ea73-4932-8de3-acc549e6df55" (UID: "75f57de3-ea73-4932-8de3-acc549e6df55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.128034 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.128333 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f57de3-ea73-4932-8de3-acc549e6df55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.428237 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f57de3-ea73-4932-8de3-acc549e6df55","Type":"ContainerDied","Data":"e3857fd3cbb04fd3fce7fd5367cb6b59e25db3a263d60d95e07436c4cf79371c"} Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.428286 4858 scope.go:117] "RemoveContainer" containerID="bd261ac717e368f7591c7f99af6e3f8047b6ab9d63f5ef6cefb05c7e722a5402" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.428402 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.456135 4858 scope.go:117] "RemoveContainer" containerID="5768407d220bd7755b56e82c0bb668e44ff572d98737280897a49b065d108733" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.476609 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.488354 4858 scope.go:117] "RemoveContainer" containerID="0d7098df3d34e24ec33675265c21e9691eb901db90aa61c45d416b48f2de15a3" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.496713 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.506304 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:00 crc kubenswrapper[4858]: E1205 14:18:00.506961 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="proxy-httpd" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.507045 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="proxy-httpd" Dec 05 14:18:00 crc kubenswrapper[4858]: E1205 14:18:00.507131 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="ceilometer-central-agent" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.507199 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="ceilometer-central-agent" Dec 05 14:18:00 crc kubenswrapper[4858]: E1205 14:18:00.507294 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="sg-core" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.507361 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="sg-core" Dec 05 14:18:00 crc kubenswrapper[4858]: E1205 14:18:00.507448 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="ceilometer-notification-agent" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.507526 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="ceilometer-notification-agent" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.507862 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="ceilometer-notification-agent" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.507954 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="proxy-httpd" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.508044 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="ceilometer-central-agent" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.508147 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" containerName="sg-core" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.510904 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.540327 4858 scope.go:117] "RemoveContainer" containerID="85a90da60685067812648d7bd3af8dfceda1e5dea3508bbb7da20b0873fdf0d9" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.542038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.542206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-config-data\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.542294 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-run-httpd\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.542372 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-log-httpd\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.542443 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.542527 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4lbc\" (UniqueName: \"kubernetes.io/projected/3fd12e39-94e7-4a8c-9c85-1c856b627d26-kube-api-access-b4lbc\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: 
I1205 14:18:00.542807 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-scripts\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.542951 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.543107 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.555555 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.644007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-run-httpd\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.644062 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-log-httpd\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.644097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.644138 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4lbc\" (UniqueName: \"kubernetes.io/projected/3fd12e39-94e7-4a8c-9c85-1c856b627d26-kube-api-access-b4lbc\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.644197 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-scripts\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.644628 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-run-httpd\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.644871 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-log-httpd\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.644973 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.645193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-config-data\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.650130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-scripts\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.650706 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.651375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.651930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-config-data\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.658516 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4lbc\" (UniqueName: \"kubernetes.io/projected/3fd12e39-94e7-4a8c-9c85-1c856b627d26-kube-api-access-b4lbc\") pod \"ceilometer-0\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " pod="openstack/ceilometer-0" Dec 05 14:18:00 crc kubenswrapper[4858]: I1205 14:18:00.885112 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:18:01 crc kubenswrapper[4858]: I1205 14:18:01.474429 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:01 crc kubenswrapper[4858]: I1205 14:18:01.918875 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f57de3-ea73-4932-8de3-acc549e6df55" path="/var/lib/kubelet/pods/75f57de3-ea73-4932-8de3-acc549e6df55/volumes" Dec 05 14:18:02 crc kubenswrapper[4858]: I1205 14:18:02.461731 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerStarted","Data":"2b78e69fed336bbecf56645ac69c29aff6f4e213a0f9360a75f30a78d4877cab"} Dec 05 14:18:02 crc kubenswrapper[4858]: I1205 14:18:02.462070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerStarted","Data":"7d960a027dcf2d2d7c755f28bcec1ac014c2d45d6db063d1492ae53264d36286"} Dec 05 14:18:02 crc kubenswrapper[4858]: I1205 14:18:02.462088 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerStarted","Data":"3f79c46b7b2a8ce452abf6143c1019dc99dac0312df85d6170852124af06aee7"} Dec 05 14:18:03 crc kubenswrapper[4858]: I1205 14:18:03.473764 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerStarted","Data":"dd6413a5032d14e46e23cec222f6facf2670e84ea77770b8f52b404d94438bf5"} Dec 05 14:18:04 crc kubenswrapper[4858]: I1205 14:18:04.488459 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerStarted","Data":"e717e69a17a83bca67e86ff6ba4b2e2689151badca6ccbc1d867d191c2d5019d"} Dec 05 14:18:04 crc kubenswrapper[4858]: I1205 14:18:04.488945 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 14:18:04 crc kubenswrapper[4858]: I1205 14:18:04.510978 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.353568259 podStartE2EDuration="4.510959031s" podCreationTimestamp="2025-12-05 14:18:00 +0000 UTC" firstStartedPulling="2025-12-05 14:18:01.48349232 +0000 UTC m=+1290.031090459" lastFinishedPulling="2025-12-05 14:18:03.640883092 +0000 UTC m=+1292.188481231" observedRunningTime="2025-12-05 14:18:04.508592817 +0000 UTC m=+1293.056190956" watchObservedRunningTime="2025-12-05 14:18:04.510959031 +0000 UTC m=+1293.058557170" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.482280 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.513561 4858 generic.go:334] "Generic (PLEG): container finished" podID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerID="01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3" exitCode=137 Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.513611 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fd8d549b-n87dk" event={"ID":"f4e91f9c-4d1e-4765-b609-32b5531066bf","Type":"ContainerDied","Data":"01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3"} Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.513640 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66fd8d549b-n87dk" event={"ID":"f4e91f9c-4d1e-4765-b609-32b5531066bf","Type":"ContainerDied","Data":"30e2442e77542139b9d497cb150822d742443cdd973da72b02b3225afb2ac138"} Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.513662 4858 scope.go:117] "RemoveContainer" containerID="2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.513710 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66fd8d549b-n87dk" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.680369 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-secret-key\") pod \"f4e91f9c-4d1e-4765-b609-32b5531066bf\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.680444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-scripts\") pod \"f4e91f9c-4d1e-4765-b609-32b5531066bf\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.680553 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-tls-certs\") pod \"f4e91f9c-4d1e-4765-b609-32b5531066bf\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.680599 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4e91f9c-4d1e-4765-b609-32b5531066bf-logs\") pod \"f4e91f9c-4d1e-4765-b609-32b5531066bf\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.680616 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-combined-ca-bundle\") pod \"f4e91f9c-4d1e-4765-b609-32b5531066bf\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.680640 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-config-data\") pod \"f4e91f9c-4d1e-4765-b609-32b5531066bf\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.680724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wzp89\" (UniqueName: \"kubernetes.io/projected/f4e91f9c-4d1e-4765-b609-32b5531066bf-kube-api-access-wzp89\") pod \"f4e91f9c-4d1e-4765-b609-32b5531066bf\" (UID: \"f4e91f9c-4d1e-4765-b609-32b5531066bf\") " Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.681160 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4e91f9c-4d1e-4765-b609-32b5531066bf-logs" (OuterVolumeSpecName: "logs") pod "f4e91f9c-4d1e-4765-b609-32b5531066bf" (UID: "f4e91f9c-4d1e-4765-b609-32b5531066bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.702184 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f4e91f9c-4d1e-4765-b609-32b5531066bf" (UID: "f4e91f9c-4d1e-4765-b609-32b5531066bf"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.702291 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e91f9c-4d1e-4765-b609-32b5531066bf-kube-api-access-wzp89" (OuterVolumeSpecName: "kube-api-access-wzp89") pod "f4e91f9c-4d1e-4765-b609-32b5531066bf" (UID: "f4e91f9c-4d1e-4765-b609-32b5531066bf"). InnerVolumeSpecName "kube-api-access-wzp89". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.711368 4858 scope.go:117] "RemoveContainer" containerID="01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.719292 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4e91f9c-4d1e-4765-b609-32b5531066bf" (UID: "f4e91f9c-4d1e-4765-b609-32b5531066bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.719691 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-scripts" (OuterVolumeSpecName: "scripts") pod "f4e91f9c-4d1e-4765-b609-32b5531066bf" (UID: "f4e91f9c-4d1e-4765-b609-32b5531066bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.727792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-config-data" (OuterVolumeSpecName: "config-data") pod "f4e91f9c-4d1e-4765-b609-32b5531066bf" (UID: "f4e91f9c-4d1e-4765-b609-32b5531066bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.761436 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "f4e91f9c-4d1e-4765-b609-32b5531066bf" (UID: "f4e91f9c-4d1e-4765-b609-32b5531066bf"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.784564 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.784839 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.784849 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4e91f9c-4d1e-4765-b609-32b5531066bf-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.784857 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.784865 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzp89\" (UniqueName: \"kubernetes.io/projected/f4e91f9c-4d1e-4765-b609-32b5531066bf-kube-api-access-wzp89\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.784874 4858 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f4e91f9c-4d1e-4765-b609-32b5531066bf-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.784882 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e91f9c-4d1e-4765-b609-32b5531066bf-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.795764 4858 scope.go:117] "RemoveContainer" containerID="2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568" Dec 05 14:18:06 crc kubenswrapper[4858]: E1205 14:18:06.797287 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568\": container with ID starting with 2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568 not found: ID does not exist" containerID="2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.797326 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568"} err="failed to get container status \"2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568\": rpc error: code = NotFound desc = could not find container \"2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568\": container with ID starting with 2c3bd84974dc44fc384954562d970fe20aa521f1aaebc65a1a2ebd50934c8568 not found: ID does not exist" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.797372 4858 scope.go:117] "RemoveContainer" containerID="01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3" Dec 05 14:18:06 crc kubenswrapper[4858]: E1205 14:18:06.797662 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3\": container with ID starting with 01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3 not found: ID does not exist" containerID="01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.797688 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3"} err="failed to get container status \"01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3\": rpc error: code = NotFound desc = could not find container \"01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3\": container with ID starting with 01d087542ea416d21ad4f13256c21b3cdcd3a942da7a28e54d13814b7bea6ac3 not found: ID does not exist" Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.849852 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66fd8d549b-n87dk"] Dec 05 14:18:06 crc kubenswrapper[4858]: I1205 14:18:06.857675 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66fd8d549b-n87dk"] Dec 05 14:18:07 crc kubenswrapper[4858]: I1205 14:18:07.910643 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" path="/var/lib/kubelet/pods/f4e91f9c-4d1e-4765-b609-32b5531066bf/volumes" Dec 05 14:18:09 crc kubenswrapper[4858]: I1205 14:18:09.550531 4858 generic.go:334] "Generic (PLEG): container finished" podID="eab50221-12a4-4a60-910e-d020c85a5e7a" containerID="9b82f2f0117c9462f2939ab45183c7a185d7bec6b48b4c77704b23e6abd9c29b" exitCode=0 Dec 05 14:18:09 crc kubenswrapper[4858]: I1205 14:18:09.550644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" event={"ID":"eab50221-12a4-4a60-910e-d020c85a5e7a","Type":"ContainerDied","Data":"9b82f2f0117c9462f2939ab45183c7a185d7bec6b48b4c77704b23e6abd9c29b"} Dec 05 14:18:10 crc kubenswrapper[4858]: I1205 14:18:10.926198 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.104149 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-scripts\") pod \"eab50221-12a4-4a60-910e-d020c85a5e7a\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.104558 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle\") pod \"eab50221-12a4-4a60-910e-d020c85a5e7a\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.104588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sw9x\" (UniqueName: \"kubernetes.io/projected/eab50221-12a4-4a60-910e-d020c85a5e7a-kube-api-access-4sw9x\") pod \"eab50221-12a4-4a60-910e-d020c85a5e7a\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.104710 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-config-data\") pod \"eab50221-12a4-4a60-910e-d020c85a5e7a\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.110626 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-scripts" (OuterVolumeSpecName: "scripts") pod "eab50221-12a4-4a60-910e-d020c85a5e7a" (UID: "eab50221-12a4-4a60-910e-d020c85a5e7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.124098 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eab50221-12a4-4a60-910e-d020c85a5e7a-kube-api-access-4sw9x" (OuterVolumeSpecName: "kube-api-access-4sw9x") pod "eab50221-12a4-4a60-910e-d020c85a5e7a" (UID: "eab50221-12a4-4a60-910e-d020c85a5e7a"). InnerVolumeSpecName "kube-api-access-4sw9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:11 crc kubenswrapper[4858]: E1205 14:18:11.130660 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle podName:eab50221-12a4-4a60-910e-d020c85a5e7a nodeName:}" failed. No retries permitted until 2025-12-05 14:18:11.630631993 +0000 UTC m=+1300.178230132 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle") pod "eab50221-12a4-4a60-910e-d020c85a5e7a" (UID: "eab50221-12a4-4a60-910e-d020c85a5e7a") : error deleting /var/lib/kubelet/pods/eab50221-12a4-4a60-910e-d020c85a5e7a/volume-subpaths: remove /var/lib/kubelet/pods/eab50221-12a4-4a60-910e-d020c85a5e7a/volume-subpaths: no such file or directory Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.135961 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-config-data" (OuterVolumeSpecName: "config-data") pod "eab50221-12a4-4a60-910e-d020c85a5e7a" (UID: "eab50221-12a4-4a60-910e-d020c85a5e7a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.206508 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.206536 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sw9x\" (UniqueName: \"kubernetes.io/projected/eab50221-12a4-4a60-910e-d020c85a5e7a-kube-api-access-4sw9x\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.206546 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.568661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" event={"ID":"eab50221-12a4-4a60-910e-d020c85a5e7a","Type":"ContainerDied","Data":"d115d88c6e35ee4cc70a99a4b938aa246e96f243afd4561ea240ec87b1d772ce"} Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.568999 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d115d88c6e35ee4cc70a99a4b938aa246e96f243afd4561ea240ec87b1d772ce" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.568718 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bmsdb" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.655899 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 14:18:11 crc kubenswrapper[4858]: E1205 14:18:11.656257 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eab50221-12a4-4a60-910e-d020c85a5e7a" containerName="nova-cell0-conductor-db-sync" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.656274 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="eab50221-12a4-4a60-910e-d020c85a5e7a" containerName="nova-cell0-conductor-db-sync" Dec 05 14:18:11 crc kubenswrapper[4858]: E1205 14:18:11.656288 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.656295 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" Dec 05 14:18:11 crc kubenswrapper[4858]: E1205 14:18:11.656311 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon-log" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.656317 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon-log" Dec 05 14:18:11 crc kubenswrapper[4858]: E1205 14:18:11.656331 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.656336 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.656527 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" Dec 05 14:18:11 
crc kubenswrapper[4858]: I1205 14:18:11.656537 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="eab50221-12a4-4a60-910e-d020c85a5e7a" containerName="nova-cell0-conductor-db-sync" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.656554 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon-log" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.657097 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.670031 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.713867 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle\") pod \"eab50221-12a4-4a60-910e-d020c85a5e7a\" (UID: \"eab50221-12a4-4a60-910e-d020c85a5e7a\") " Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.731564 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eab50221-12a4-4a60-910e-d020c85a5e7a" (UID: "eab50221-12a4-4a60-910e-d020c85a5e7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.815972 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.816271 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.816393 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lklgv\" (UniqueName: \"kubernetes.io/projected/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-kube-api-access-lklgv\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.816484 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eab50221-12a4-4a60-910e-d020c85a5e7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.918599 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.920270 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lklgv\" (UniqueName: \"kubernetes.io/projected/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-kube-api-access-lklgv\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.920604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.924768 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.927310 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.936342 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lklgv\" (UniqueName: \"kubernetes.io/projected/f21026e0-ecd8-4c9e-b21d-5e911ae0c53f-kube-api-access-lklgv\") pod \"nova-cell0-conductor-0\" (UID: \"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f\") " pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:11 crc kubenswrapper[4858]: I1205 14:18:11.973417 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:12 crc kubenswrapper[4858]: I1205 14:18:12.467507 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 14:18:12 crc kubenswrapper[4858]: W1205 14:18:12.472768 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf21026e0_ecd8_4c9e_b21d_5e911ae0c53f.slice/crio-ab253d04aaab3c3c34095355f5e1d575419196375aa0bd282b24b9fa81286e11 WatchSource:0}: Error finding container ab253d04aaab3c3c34095355f5e1d575419196375aa0bd282b24b9fa81286e11: Status 404 returned error can't find the container with id ab253d04aaab3c3c34095355f5e1d575419196375aa0bd282b24b9fa81286e11 Dec 05 14:18:12 crc kubenswrapper[4858]: I1205 14:18:12.577949 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f","Type":"ContainerStarted","Data":"ab253d04aaab3c3c34095355f5e1d575419196375aa0bd282b24b9fa81286e11"} Dec 05 14:18:13 crc kubenswrapper[4858]: I1205 14:18:13.588030 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f21026e0-ecd8-4c9e-b21d-5e911ae0c53f","Type":"ContainerStarted","Data":"fd1580c480174f8a63d9ba3fcbf0bbfbce2314c0732f8987372e342430320fdf"} Dec 05 14:18:13 crc kubenswrapper[4858]: I1205 14:18:13.588322 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:13 crc kubenswrapper[4858]: I1205 14:18:13.605718 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.605690553 podStartE2EDuration="2.605690553s" podCreationTimestamp="2025-12-05 14:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:18:13.601974353 +0000 UTC m=+1302.149572512" watchObservedRunningTime="2025-12-05 14:18:13.605690553 +0000 UTC m=+1302.153288712" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.014656 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.476983 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7bj86"] Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.477538 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e91f9c-4d1e-4765-b609-32b5531066bf" containerName="horizon" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.478138 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.482125 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.495205 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7bj86"] Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.495540 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.626533 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-config-data\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.626588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.626647 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-scripts\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.626811 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88ccq\" (UniqueName: \"kubernetes.io/projected/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-kube-api-access-88ccq\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.727894 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88ccq\" (UniqueName: \"kubernetes.io/projected/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-kube-api-access-88ccq\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.727981 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-config-data\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.728001 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.728034 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-scripts\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.733962 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-config-data\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.747647 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-scripts\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.749495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.779730 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88ccq\" (UniqueName: \"kubernetes.io/projected/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-kube-api-access-88ccq\") pod \"nova-cell0-cell-mapping-7bj86\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.800692 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.819399 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.824636 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.839868 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.868382 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.927935 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.929098 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.932211 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.933446 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cf938fe-5eb1-422a-8afb-96ee30e886e4-logs\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.933478 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bsmn\" (UniqueName: \"kubernetes.io/projected/5cf938fe-5eb1-422a-8afb-96ee30e886e4-kube-api-access-8bsmn\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.933545 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.933602 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-config-data\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:22 crc kubenswrapper[4858]: I1205 14:18:22.945915 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.002922 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.023395 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.036679 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.036873 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbxx6\" (UniqueName: \"kubernetes.io/projected/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-kube-api-access-tbxx6\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.036907 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-config-data\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.036934 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-config-data\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.036954 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.037034 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cf938fe-5eb1-422a-8afb-96ee30e886e4-logs\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.037050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bsmn\" (UniqueName: \"kubernetes.io/projected/5cf938fe-5eb1-422a-8afb-96ee30e886e4-kube-api-access-8bsmn\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.039202 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.039752 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cf938fe-5eb1-422a-8afb-96ee30e886e4-logs\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.054783 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.059801 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-config-data\") pod \"nova-api-0\" (UID: 
\"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.081314 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.095399 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bsmn\" (UniqueName: \"kubernetes.io/projected/5cf938fe-5eb1-422a-8afb-96ee30e886e4-kube-api-access-8bsmn\") pod \"nova-api-0\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") " pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.145978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.146057 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbxx6\" (UniqueName: \"kubernetes.io/projected/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-kube-api-access-tbxx6\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.146080 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-config-data\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.146105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-config-data\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.146141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.146170 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7cw\" (UniqueName: \"kubernetes.io/projected/be58e532-6d99-4983-a5bc-f0eeabf75449-kube-api-access-md7cw\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.146189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be58e532-6d99-4983-a5bc-f0eeabf75449-logs\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.157481 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-config-data\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.187436 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.226479 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbxx6\" (UniqueName: \"kubernetes.io/projected/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-kube-api-access-tbxx6\") pod \"nova-scheduler-0\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.228732 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.230101 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.251853 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md7cw\" (UniqueName: \"kubernetes.io/projected/be58e532-6d99-4983-a5bc-f0eeabf75449-kube-api-access-md7cw\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.251905 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be58e532-6d99-4983-a5bc-f0eeabf75449-logs\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.252002 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.252041 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-config-data\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.252963 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.257092 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.258886 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.261478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-config-data\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.261702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be58e532-6d99-4983-a5bc-f0eeabf75449-logs\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.282276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md7cw\" (UniqueName: \"kubernetes.io/projected/be58e532-6d99-4983-a5bc-f0eeabf75449-kube-api-access-md7cw\") pod \"nova-metadata-0\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.298119 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.327495 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.329584 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbf48cbcc-jszgr"] Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.347174 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbf48cbcc-jszgr"] Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.347272 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.353560 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4s5x\" (UniqueName: \"kubernetes.io/projected/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-kube-api-access-h4s5x\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.353842 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.353968 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.455980 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z49n\" (UniqueName: \"kubernetes.io/projected/a6049824-ca90-4452-988c-19c7fa7117f9-kube-api-access-6z49n\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.456309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.456370 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-svc\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.456393 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4s5x\" (UniqueName: \"kubernetes.io/projected/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-kube-api-access-h4s5x\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.456437 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.456468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-config\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.456486 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.456529 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.456555 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.460144 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.466130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.468883 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.481330 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4s5x\" (UniqueName: \"kubernetes.io/projected/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-kube-api-access-h4s5x\") pod \"nova-cell1-novncproxy-0\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.561652 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-svc\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.561744 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.561782 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-config\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.561805 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.561858 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.562712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.564808 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-config\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.566293 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.566784 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.570485 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-svc\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.571178 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z49n\" (UniqueName: \"kubernetes.io/projected/a6049824-ca90-4452-988c-19c7fa7117f9-kube-api-access-6z49n\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.584404 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.591804 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z49n\" (UniqueName: \"kubernetes.io/projected/a6049824-ca90-4452-988c-19c7fa7117f9-kube-api-access-6z49n\") pod \"dnsmasq-dns-5fbf48cbcc-jszgr\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.709395 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.821908 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7bj86"] Dec 05 14:18:23 crc kubenswrapper[4858]: I1205 14:18:23.924710 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.217747 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.358091 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46pzh"] Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.359403 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.370041 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.371011 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.387332 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.410579 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46pzh"] Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.433874 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.434357 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-config-data\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.434705 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4wlb\" (UniqueName: \"kubernetes.io/projected/1bb393c0-f903-4e17-82dd-84392e4231aa-kube-api-access-b4wlb\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.434843 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-scripts\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.536794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4wlb\" (UniqueName: \"kubernetes.io/projected/1bb393c0-f903-4e17-82dd-84392e4231aa-kube-api-access-b4wlb\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.536845 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-scripts\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.536909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: 
\"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.536968 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-config-data\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.547598 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.548432 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-scripts\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.548444 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-config-data\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.574523 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4wlb\" (UniqueName: \"kubernetes.io/projected/1bb393c0-f903-4e17-82dd-84392e4231aa-kube-api-access-b4wlb\") pod \"nova-cell1-conductor-db-sync-46pzh\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.606847 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.702300 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.778593 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbf48cbcc-jszgr"] Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.852852 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c7d9d60-f3ae-405d-bac3-ef1f8323595b","Type":"ContainerStarted","Data":"cfe9abba50edff28f5ad4a38d1245020d9286aeca5ae256542f46e3a58ce7bbc"} Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.856546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" event={"ID":"a6049824-ca90-4452-988c-19c7fa7117f9","Type":"ContainerStarted","Data":"a00221adc11bea83fd242b60f5735d5f45a84746c7d86d303aaba1a79988dc18"} Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.872370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be58e532-6d99-4983-a5bc-f0eeabf75449","Type":"ContainerStarted","Data":"465ff796a6d9d5fbd471f1c7b211af68e9e7af8c82691a52c7520d88bddf22b8"} Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.881583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7bj86" event={"ID":"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7","Type":"ContainerStarted","Data":"f6d9b43f5dd92889cc375b17ea5654a67f049c39e5a3cd348ad7a53f602d2850"} Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.881625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7bj86" event={"ID":"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7","Type":"ContainerStarted","Data":"d5b4737202265989470977d79179a6dcef9cdf7d3b21e58519dd5daa8c779943"} Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.888850 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01","Type":"ContainerStarted","Data":"097e4549cda5009ef9fe1050d77e9641a1e7fe19fbfa5e75784c1f6d693f7e2f"} Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.896882 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cf938fe-5eb1-422a-8afb-96ee30e886e4","Type":"ContainerStarted","Data":"509126a1b71352dccc8b7179c4d4d8a6ae27af2172b68cb64d3c4c5714a5acb3"} Dec 05 14:18:24 crc kubenswrapper[4858]: I1205 14:18:24.908986 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7bj86" podStartSLOduration=2.908961773 podStartE2EDuration="2.908961773s" podCreationTimestamp="2025-12-05 14:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:18:24.896975681 +0000 UTC m=+1313.444573820" watchObservedRunningTime="2025-12-05 14:18:24.908961773 +0000 UTC m=+1313.456559912" Dec 05 14:18:25 crc kubenswrapper[4858]: I1205 14:18:25.438920 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46pzh"] Dec 05 14:18:25 crc kubenswrapper[4858]: W1205 14:18:25.446480 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bb393c0_f903_4e17_82dd_84392e4231aa.slice/crio-6bbe9060ee516adee5a2eb5d50c788f938d90559ff35e38ae278b85cafcc8616 WatchSource:0}: Error finding container 
6bbe9060ee516adee5a2eb5d50c788f938d90559ff35e38ae278b85cafcc8616: Status 404 returned error can't find the container with id 6bbe9060ee516adee5a2eb5d50c788f938d90559ff35e38ae278b85cafcc8616 Dec 05 14:18:25 crc kubenswrapper[4858]: I1205 14:18:25.925061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46pzh" event={"ID":"1bb393c0-f903-4e17-82dd-84392e4231aa","Type":"ContainerStarted","Data":"df796af70d02d561de7faa3ab20457fdeb0deb36e97d086dc241cc025289fc4c"} Dec 05 14:18:25 crc kubenswrapper[4858]: I1205 14:18:25.925625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46pzh" event={"ID":"1bb393c0-f903-4e17-82dd-84392e4231aa","Type":"ContainerStarted","Data":"6bbe9060ee516adee5a2eb5d50c788f938d90559ff35e38ae278b85cafcc8616"} Dec 05 14:18:25 crc kubenswrapper[4858]: I1205 14:18:25.932846 4858 generic.go:334] "Generic (PLEG): container finished" podID="a6049824-ca90-4452-988c-19c7fa7117f9" containerID="ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c" exitCode=0 Dec 05 14:18:25 crc kubenswrapper[4858]: I1205 14:18:25.934511 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" event={"ID":"a6049824-ca90-4452-988c-19c7fa7117f9","Type":"ContainerDied","Data":"ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c"} Dec 05 14:18:25 crc kubenswrapper[4858]: I1205 14:18:25.951356 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-46pzh" podStartSLOduration=1.951335481 podStartE2EDuration="1.951335481s" podCreationTimestamp="2025-12-05 14:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:18:25.94499797 +0000 UTC m=+1314.492596109" watchObservedRunningTime="2025-12-05 14:18:25.951335481 +0000 UTC m=+1314.498933620" Dec 05 14:18:27 crc kubenswrapper[4858]: I1205 14:18:27.359182 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:18:27 crc kubenswrapper[4858]: I1205 14:18:27.372259 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.973670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cf938fe-5eb1-422a-8afb-96ee30e886e4","Type":"ContainerStarted","Data":"9de29fd33a035d9150945cdd410d884821c68eed63d12e33b2473d84e84b2f09"} Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.979488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c7d9d60-f3ae-405d-bac3-ef1f8323595b","Type":"ContainerStarted","Data":"5f44ddf98a2de9eda890b8712642c4bd9885b527f35cec5c53af754b7a164511"} Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.985719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" event={"ID":"a6049824-ca90-4452-988c-19c7fa7117f9","Type":"ContainerStarted","Data":"16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181"} Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.986232 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.989670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"be58e532-6d99-4983-a5bc-f0eeabf75449","Type":"ContainerStarted","Data":"79a135fe0dfce42d5ef7fa926c4e7b1c45ceb5f7b7c60040950ed8ace5183240"} Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.989761 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerName="nova-metadata-log" containerID="cri-o://79a135fe0dfce42d5ef7fa926c4e7b1c45ceb5f7b7c60040950ed8ace5183240" gracePeriod=30 Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.989863 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerName="nova-metadata-metadata" containerID="cri-o://9806652011cbb22cd69113b15ec6979dc655a3836a4ea1879725fb3fcd2dee5a" gracePeriod=30 Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.993367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01","Type":"ContainerStarted","Data":"f61829d7f9dfcbb3a4fdb6930f130fff9260df20125133b0154454d503a3030f"} Dec 05 14:18:28 crc kubenswrapper[4858]: I1205 14:18:28.993449 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f61829d7f9dfcbb3a4fdb6930f130fff9260df20125133b0154454d503a3030f" gracePeriod=30 Dec 05 14:18:29 crc kubenswrapper[4858]: I1205 14:18:29.001279 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.933136935 podStartE2EDuration="7.001264366s" podCreationTimestamp="2025-12-05 14:18:22 +0000 UTC" firstStartedPulling="2025-12-05 14:18:24.230349134 +0000 UTC m=+1312.777947273" lastFinishedPulling="2025-12-05 14:18:28.298476555 +0000 UTC m=+1316.846074704" observedRunningTime="2025-12-05 14:18:28.994997117 +0000 UTC m=+1317.542595256" watchObservedRunningTime="2025-12-05 14:18:29.001264366 +0000 UTC m=+1317.548862505" Dec 05 14:18:29 crc kubenswrapper[4858]: I1205 14:18:29.027260 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.131042136 podStartE2EDuration="7.027243546s" podCreationTimestamp="2025-12-05 14:18:22 +0000 UTC" firstStartedPulling="2025-12-05 14:18:24.402537852 +0000 UTC m=+1312.950135991" lastFinishedPulling="2025-12-05 14:18:28.298739262 +0000 UTC m=+1316.846337401" observedRunningTime="2025-12-05 14:18:29.016298701 +0000 UTC m=+1317.563896840" watchObservedRunningTime="2025-12-05 14:18:29.027243546 +0000 UTC m=+1317.574841685" Dec 05 14:18:29 crc kubenswrapper[4858]: I1205 14:18:29.074725 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" podStartSLOduration=6.074708814 podStartE2EDuration="6.074708814s" podCreationTimestamp="2025-12-05 14:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:18:29.050967965 +0000 UTC m=+1317.598566114" watchObservedRunningTime="2025-12-05 14:18:29.074708814 +0000 UTC m=+1317.622306953" Dec 05 14:18:30 crc kubenswrapper[4858]: I1205 14:18:30.003315 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"5cf938fe-5eb1-422a-8afb-96ee30e886e4","Type":"ContainerStarted","Data":"be017711ff0414b862c2da31bc05c5e0466794196bccca386624eaef5d76c888"} Dec 05 14:18:30 crc kubenswrapper[4858]: I1205 14:18:30.006914 4858 generic.go:334] "Generic (PLEG): container finished" podID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerID="79a135fe0dfce42d5ef7fa926c4e7b1c45ceb5f7b7c60040950ed8ace5183240" exitCode=143 Dec 05 14:18:30 crc kubenswrapper[4858]: I1205 14:18:30.007070 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be58e532-6d99-4983-a5bc-f0eeabf75449","Type":"ContainerDied","Data":"79a135fe0dfce42d5ef7fa926c4e7b1c45ceb5f7b7c60040950ed8ace5183240"} Dec 05 14:18:30 crc kubenswrapper[4858]: I1205 14:18:30.007157 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be58e532-6d99-4983-a5bc-f0eeabf75449","Type":"ContainerStarted","Data":"9806652011cbb22cd69113b15ec6979dc655a3836a4ea1879725fb3fcd2dee5a"} Dec 05 14:18:30 crc kubenswrapper[4858]: I1205 14:18:30.067631 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.755487736 podStartE2EDuration="8.06760768s" podCreationTimestamp="2025-12-05 14:18:22 +0000 UTC" firstStartedPulling="2025-12-05 14:18:23.988376066 +0000 UTC m=+1312.535974205" lastFinishedPulling="2025-12-05 14:18:28.30049601 +0000 UTC m=+1316.848094149" observedRunningTime="2025-12-05 14:18:30.022016911 +0000 UTC m=+1318.569615050" watchObservedRunningTime="2025-12-05 14:18:30.06760768 +0000 UTC m=+1318.615205819" Dec 05 14:18:30 crc kubenswrapper[4858]: I1205 14:18:30.074110 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.433001166 podStartE2EDuration="7.074088844s" podCreationTimestamp="2025-12-05 14:18:23 +0000 UTC" firstStartedPulling="2025-12-05 14:18:24.659864074 +0000 UTC m=+1313.207462213" lastFinishedPulling="2025-12-05 14:18:28.300951752 +0000 UTC m=+1316.848549891" observedRunningTime="2025-12-05 14:18:29.074290603 +0000 UTC m=+1317.621888732" watchObservedRunningTime="2025-12-05 14:18:30.074088844 +0000 UTC m=+1318.621686983" Dec 05 14:18:30 crc kubenswrapper[4858]: I1205 14:18:30.892309 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.299805 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.300329 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.328833 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.328885 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.364015 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.469748 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.469804 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.585639 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.715992 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.783334 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c795fd55-4cmqs"] Dec 05 14:18:33 crc kubenswrapper[4858]: I1205 14:18:33.783702 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" podUID="0051a952-b753-48c8-af95-52ca1cd543b8" containerName="dnsmasq-dns" containerID="cri-o://79d1dce0c31da28553f959cc24caa6e3ee6e664679454989e80cd72f3b43fa6d" gracePeriod=10 Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.069386 4858 generic.go:334] "Generic (PLEG): container finished" podID="2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" containerID="f6d9b43f5dd92889cc375b17ea5654a67f049c39e5a3cd348ad7a53f602d2850" exitCode=0 Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.069584 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7bj86" event={"ID":"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7","Type":"ContainerDied","Data":"f6d9b43f5dd92889cc375b17ea5654a67f049c39e5a3cd348ad7a53f602d2850"} Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.074197 4858 generic.go:334] "Generic (PLEG): container finished" podID="0051a952-b753-48c8-af95-52ca1cd543b8" containerID="79d1dce0c31da28553f959cc24caa6e3ee6e664679454989e80cd72f3b43fa6d" exitCode=0 Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.074245 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" event={"ID":"0051a952-b753-48c8-af95-52ca1cd543b8","Type":"ContainerDied","Data":"79d1dce0c31da28553f959cc24caa6e3ee6e664679454989e80cd72f3b43fa6d"} Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.145537 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.384017 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.384351 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.460328 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.601102 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-nb\") pod \"0051a952-b753-48c8-af95-52ca1cd543b8\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.601173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-sb\") pod \"0051a952-b753-48c8-af95-52ca1cd543b8\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.601264 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtwhg\" (UniqueName: \"kubernetes.io/projected/0051a952-b753-48c8-af95-52ca1cd543b8-kube-api-access-jtwhg\") pod \"0051a952-b753-48c8-af95-52ca1cd543b8\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.601302 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-swift-storage-0\") pod \"0051a952-b753-48c8-af95-52ca1cd543b8\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.601335 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-config\") pod \"0051a952-b753-48c8-af95-52ca1cd543b8\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.601411 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-svc\") pod \"0051a952-b753-48c8-af95-52ca1cd543b8\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.617890 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0051a952-b753-48c8-af95-52ca1cd543b8-kube-api-access-jtwhg" (OuterVolumeSpecName: "kube-api-access-jtwhg") pod "0051a952-b753-48c8-af95-52ca1cd543b8" (UID: "0051a952-b753-48c8-af95-52ca1cd543b8"). InnerVolumeSpecName "kube-api-access-jtwhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.694441 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0051a952-b753-48c8-af95-52ca1cd543b8" (UID: "0051a952-b753-48c8-af95-52ca1cd543b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.703619 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0051a952-b753-48c8-af95-52ca1cd543b8" (UID: "0051a952-b753-48c8-af95-52ca1cd543b8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.703767 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-swift-storage-0\") pod \"0051a952-b753-48c8-af95-52ca1cd543b8\" (UID: \"0051a952-b753-48c8-af95-52ca1cd543b8\") " Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.704408 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtwhg\" (UniqueName: \"kubernetes.io/projected/0051a952-b753-48c8-af95-52ca1cd543b8-kube-api-access-jtwhg\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.704429 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:34 crc kubenswrapper[4858]: W1205 14:18:34.704594 4858 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0051a952-b753-48c8-af95-52ca1cd543b8/volumes/kubernetes.io~configmap/dns-swift-storage-0 Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.704690 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0051a952-b753-48c8-af95-52ca1cd543b8" (UID: "0051a952-b753-48c8-af95-52ca1cd543b8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.722176 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-config" (OuterVolumeSpecName: "config") pod "0051a952-b753-48c8-af95-52ca1cd543b8" (UID: "0051a952-b753-48c8-af95-52ca1cd543b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.723120 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0051a952-b753-48c8-af95-52ca1cd543b8" (UID: "0051a952-b753-48c8-af95-52ca1cd543b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.726636 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0051a952-b753-48c8-af95-52ca1cd543b8" (UID: "0051a952-b753-48c8-af95-52ca1cd543b8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.805987 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.806028 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.806038 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:34 crc kubenswrapper[4858]: I1205 14:18:34.806048 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0051a952-b753-48c8-af95-52ca1cd543b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.106544 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.106673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c795fd55-4cmqs" event={"ID":"0051a952-b753-48c8-af95-52ca1cd543b8","Type":"ContainerDied","Data":"68aa07234f1b36f7595a7d5c562e0428ed95b7fe99f5355d9d93964763fb6600"} Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.106746 4858 scope.go:117] "RemoveContainer" containerID="79d1dce0c31da28553f959cc24caa6e3ee6e664679454989e80cd72f3b43fa6d" Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.177762 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c795fd55-4cmqs"] Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.179498 4858 scope.go:117] "RemoveContainer" containerID="536f7255db5c1df4f6243f7c48543bd8780cf0a52e2fb4deec18ec21919eae07" Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.189770 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c795fd55-4cmqs"] Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.522588 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.523093 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="805d1f07-ba33-4534-8fe0-3697049c2eb6" containerName="kube-state-metrics" containerID="cri-o://e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a" gracePeriod=30 Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.796893 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7bj86" Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.910245 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0051a952-b753-48c8-af95-52ca1cd543b8" path="/var/lib/kubelet/pods/0051a952-b753-48c8-af95-52ca1cd543b8/volumes" Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.953978 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-combined-ca-bundle\") pod \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.954029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88ccq\" (UniqueName: \"kubernetes.io/projected/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-kube-api-access-88ccq\") pod \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.954114 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-config-data\") pod \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.954277 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-scripts\") pod \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\" (UID: \"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7\") " Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.965940 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-scripts" (OuterVolumeSpecName: "scripts") pod "2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" (UID: "2c0b9622-7ee2-433a-a43d-a2ea667bd7f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:35 crc kubenswrapper[4858]: I1205 14:18:35.979305 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-kube-api-access-88ccq" (OuterVolumeSpecName: "kube-api-access-88ccq") pod "2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" (UID: "2c0b9622-7ee2-433a-a43d-a2ea667bd7f7"). InnerVolumeSpecName "kube-api-access-88ccq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.016958 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-config-data" (OuterVolumeSpecName: "config-data") pod "2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" (UID: "2c0b9622-7ee2-433a-a43d-a2ea667bd7f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.018873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" (UID: "2c0b9622-7ee2-433a-a43d-a2ea667bd7f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.057678 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.057718 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.057732 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88ccq\" (UniqueName: \"kubernetes.io/projected/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-kube-api-access-88ccq\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.057743 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.114778 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.116381 4858 generic.go:334] "Generic (PLEG): container finished" podID="805d1f07-ba33-4534-8fe0-3697049c2eb6" containerID="e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a" exitCode=2 Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.116432 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"805d1f07-ba33-4534-8fe0-3697049c2eb6","Type":"ContainerDied","Data":"e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a"} Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.116456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"805d1f07-ba33-4534-8fe0-3697049c2eb6","Type":"ContainerDied","Data":"f22b49108b18b8ed83af2520740d7e26067700d2d0ee7f48ad64c8694993cf62"} Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.116471 4858 scope.go:117] "RemoveContainer" containerID="e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.118234 4858 generic.go:334] "Generic (PLEG): container finished" podID="1bb393c0-f903-4e17-82dd-84392e4231aa" containerID="df796af70d02d561de7faa3ab20457fdeb0deb36e97d086dc241cc025289fc4c" exitCode=0 Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.118269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46pzh" event={"ID":"1bb393c0-f903-4e17-82dd-84392e4231aa","Type":"ContainerDied","Data":"df796af70d02d561de7faa3ab20457fdeb0deb36e97d086dc241cc025289fc4c"} Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.119959 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7bj86"
Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.119977 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7bj86" event={"ID":"2c0b9622-7ee2-433a-a43d-a2ea667bd7f7","Type":"ContainerDied","Data":"d5b4737202265989470977d79179a6dcef9cdf7d3b21e58519dd5daa8c779943"}
Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.120007 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b4737202265989470977d79179a6dcef9cdf7d3b21e58519dd5daa8c779943"
Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.141902 4858 scope.go:117] "RemoveContainer" containerID="e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a"
Dec 05 14:18:36 crc kubenswrapper[4858]: E1205 14:18:36.142743 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a\": container with ID starting with e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a not found: ID does not exist" containerID="e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a"
Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.142784 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a"} err="failed to get container status \"e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a\": rpc error: code = NotFound desc = could not find container \"e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a\": container with ID starting with e3f6bf903a636481f95bfa20d606bacfe52288049ec810644ba07e8b5090694a not found: ID does not exist"
Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.159256 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l45ht\" (UniqueName: \"kubernetes.io/projected/805d1f07-ba33-4534-8fe0-3697049c2eb6-kube-api-access-l45ht\") pod \"805d1f07-ba33-4534-8fe0-3697049c2eb6\" (UID: \"805d1f07-ba33-4534-8fe0-3697049c2eb6\") "
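
The RemoveContainer / NotFound pair above is benign: by the time the kubelet asks for the container's status during cleanup, CRI-O has already removed it, and a delete whose target is already gone has achieved its goal. A small sketch of that idempotent-delete convention, with removeContainer as a hypothetical stand-in for the runtime call (not the kubelet's actual code path):

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    // removeContainer stands in for the runtime's remove call; here it
    // always reports the container as already gone, like the NotFound above.
    func removeContainer(id string) error {
        return fmt.Errorf("could not find container %q: %w", id, errNotFound)
    }

    // ensureRemoved treats "already gone" as success: the point of the call
    // is that the container no longer exists, and it already doesn't.
    func ensureRemoved(id string) error {
        if err := removeContainer(id); err != nil && !errors.Is(err, errNotFound) {
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(ensureRemoved("e3f6bf903a63")) // <nil>
    }
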
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.262793 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l45ht\" (UniqueName: \"kubernetes.io/projected/805d1f07-ba33-4534-8fe0-3697049c2eb6-kube-api-access-l45ht\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.438932 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.439208 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-log" containerID="cri-o://9de29fd33a035d9150945cdd410d884821c68eed63d12e33b2473d84e84b2f09" gracePeriod=30 Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.439289 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-api" containerID="cri-o://be017711ff0414b862c2da31bc05c5e0466794196bccca386624eaef5d76c888" gracePeriod=30 Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.454273 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:36 crc kubenswrapper[4858]: I1205 14:18:36.454482 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0c7d9d60-f3ae-405d-bac3-ef1f8323595b" containerName="nova-scheduler-scheduler" containerID="cri-o://5f44ddf98a2de9eda890b8712642c4bd9885b527f35cec5c53af754b7a164511" gracePeriod=30 Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.137375 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.141398 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerID="9de29fd33a035d9150945cdd410d884821c68eed63d12e33b2473d84e84b2f09" exitCode=143 Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.141488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cf938fe-5eb1-422a-8afb-96ee30e886e4","Type":"ContainerDied","Data":"9de29fd33a035d9150945cdd410d884821c68eed63d12e33b2473d84e84b2f09"} Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.177660 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.194663 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.203560 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 14:18:37 crc kubenswrapper[4858]: E1205 14:18:37.203988 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805d1f07-ba33-4534-8fe0-3697049c2eb6" containerName="kube-state-metrics" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.204007 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="805d1f07-ba33-4534-8fe0-3697049c2eb6" containerName="kube-state-metrics" Dec 05 14:18:37 crc kubenswrapper[4858]: E1205 14:18:37.204047 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051a952-b753-48c8-af95-52ca1cd543b8" containerName="init" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.204053 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051a952-b753-48c8-af95-52ca1cd543b8" containerName="init" Dec 05 14:18:37 crc kubenswrapper[4858]: E1205 14:18:37.204065 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0051a952-b753-48c8-af95-52ca1cd543b8" containerName="dnsmasq-dns" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.204072 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0051a952-b753-48c8-af95-52ca1cd543b8" containerName="dnsmasq-dns" Dec 05 14:18:37 crc kubenswrapper[4858]: E1205 14:18:37.204083 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" containerName="nova-manage" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.204088 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" containerName="nova-manage" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.207481 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0051a952-b753-48c8-af95-52ca1cd543b8" containerName="dnsmasq-dns" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.207510 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="805d1f07-ba33-4534-8fe0-3697049c2eb6" containerName="kube-state-metrics" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.207566 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" containerName="nova-manage" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.208332 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.211410 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.213995 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.229108 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.282602 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.282768 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.282872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.282992 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcr2m\" (UniqueName: \"kubernetes.io/projected/34c521aa-4339-4571-9168-f2939e083ea5-kube-api-access-zcr2m\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.385129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.385202 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.385228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.385276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcr2m\" 
(UniqueName: \"kubernetes.io/projected/34c521aa-4339-4571-9168-f2939e083ea5-kube-api-access-zcr2m\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.391470 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.393939 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.403149 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/34c521aa-4339-4571-9168-f2939e083ea5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.403885 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcr2m\" (UniqueName: \"kubernetes.io/projected/34c521aa-4339-4571-9168-f2939e083ea5-kube-api-access-zcr2m\") pod \"kube-state-metrics-0\" (UID: \"34c521aa-4339-4571-9168-f2939e083ea5\") " pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.554257 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.558612 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.587677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-combined-ca-bundle\") pod \"1bb393c0-f903-4e17-82dd-84392e4231aa\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.587871 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-config-data\") pod \"1bb393c0-f903-4e17-82dd-84392e4231aa\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.587970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-scripts\") pod \"1bb393c0-f903-4e17-82dd-84392e4231aa\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.588001 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4wlb\" (UniqueName: \"kubernetes.io/projected/1bb393c0-f903-4e17-82dd-84392e4231aa-kube-api-access-b4wlb\") pod \"1bb393c0-f903-4e17-82dd-84392e4231aa\" (UID: \"1bb393c0-f903-4e17-82dd-84392e4231aa\") " Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.596117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb393c0-f903-4e17-82dd-84392e4231aa-kube-api-access-b4wlb" (OuterVolumeSpecName: "kube-api-access-b4wlb") pod "1bb393c0-f903-4e17-82dd-84392e4231aa" (UID: "1bb393c0-f903-4e17-82dd-84392e4231aa"). InnerVolumeSpecName "kube-api-access-b4wlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.614127 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-scripts" (OuterVolumeSpecName: "scripts") pod "1bb393c0-f903-4e17-82dd-84392e4231aa" (UID: "1bb393c0-f903-4e17-82dd-84392e4231aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.622984 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-config-data" (OuterVolumeSpecName: "config-data") pod "1bb393c0-f903-4e17-82dd-84392e4231aa" (UID: "1bb393c0-f903-4e17-82dd-84392e4231aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.653038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bb393c0-f903-4e17-82dd-84392e4231aa" (UID: "1bb393c0-f903-4e17-82dd-84392e4231aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.690101 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.690133 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.690142 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb393c0-f903-4e17-82dd-84392e4231aa-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.690150 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4wlb\" (UniqueName: \"kubernetes.io/projected/1bb393c0-f903-4e17-82dd-84392e4231aa-kube-api-access-b4wlb\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:37 crc kubenswrapper[4858]: I1205 14:18:37.911303 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805d1f07-ba33-4534-8fe0-3697049c2eb6" path="/var/lib/kubelet/pods/805d1f07-ba33-4534-8fe0-3697049c2eb6/volumes" Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.031069 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.161197 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"34c521aa-4339-4571-9168-f2939e083ea5","Type":"ContainerStarted","Data":"00763c7c203c4c6d003fac23c6914819d46b854982bf93bee9642a405e2bb02c"} Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.162739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46pzh" event={"ID":"1bb393c0-f903-4e17-82dd-84392e4231aa","Type":"ContainerDied","Data":"6bbe9060ee516adee5a2eb5d50c788f938d90559ff35e38ae278b85cafcc8616"} Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.162765 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bbe9060ee516adee5a2eb5d50c788f938d90559ff35e38ae278b85cafcc8616" Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.162837 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46pzh" Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.273134 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 14:18:38 crc kubenswrapper[4858]: E1205 14:18:38.273533 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb393c0-f903-4e17-82dd-84392e4231aa" containerName="nova-cell1-conductor-db-sync" Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.273548 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb393c0-f903-4e17-82dd-84392e4231aa" containerName="nova-cell1-conductor-db-sync" Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.273716 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb393c0-f903-4e17-82dd-84392e4231aa" containerName="nova-cell1-conductor-db-sync" Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.274339 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.276228 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.286587 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 05 14:18:38 crc kubenswrapper[4858]: E1205 14:18:38.332297 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5f44ddf98a2de9eda890b8712642c4bd9885b527f35cec5c53af754b7a164511" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 05 14:18:38 crc kubenswrapper[4858]: E1205 14:18:38.333621 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5f44ddf98a2de9eda890b8712642c4bd9885b527f35cec5c53af754b7a164511" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 05 14:18:38 crc kubenswrapper[4858]: E1205 14:18:38.336106 4858 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5f44ddf98a2de9eda890b8712642c4bd9885b527f35cec5c53af754b7a164511" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Dec 05 14:18:38 crc kubenswrapper[4858]: E1205 14:18:38.336173 4858 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0c7d9d60-f3ae-405d-bac3-ef1f8323595b" containerName="nova-scheduler-scheduler"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.359542 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.359881 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="ceilometer-central-agent" containerID="cri-o://7d960a027dcf2d2d7c755f28bcec1ac014c2d45d6db063d1492ae53264d36286" gracePeriod=30
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.359924 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="sg-core" containerID="cri-o://dd6413a5032d14e46e23cec222f6facf2670e84ea77770b8f52b404d94438bf5" gracePeriod=30
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.359934 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="ceilometer-notification-agent" containerID="cri-o://2b78e69fed336bbecf56645ac69c29aff6f4e213a0f9360a75f30a78d4877cab" gracePeriod=30
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.359934 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="proxy-httpd" containerID="cri-o://e717e69a17a83bca67e86ff6ba4b2e2689151badca6ccbc1d867d191c2d5019d" gracePeriod=30
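
The three ExecSync failures above are the readiness probe (/usr/bin/pgrep -r DRST nova-scheduler) racing the container's own termination: the runtime refuses to register a new exec in a stopping container, so the probe errors rather than cleanly failing. A rough sketch of how such an exec probe maps command outcomes to results, run locally only for illustration (the kubelet issues this through the CRI ExecSync RPC, not a local exec):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // probe runs a readiness command with a deadline and maps the outcome
    // to ready / not-ready / probe-error, the three cases visible in the log.
    func probe(ctx context.Context, argv ...string) string {
        ctx, cancel := context.WithTimeout(ctx, time.Second)
        defer cancel()
        err := exec.CommandContext(ctx, argv[0], argv[1:]...).Run()
        switch e := err.(type) {
        case nil:
            return "ready" // exit 0: a matching process exists
        case *exec.ExitError:
            return fmt.Sprintf("not ready (exit %d)", e.ExitCode())
        default:
            // e.g. the runtime cannot register the exec at all, as in the
            // "container is stopping" entries above
            return "probe error: " + err.Error()
        }
    }

    func main() {
        fmt.Println(probe(context.Background(), "/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"))
    }
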
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.402510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.402580 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.402749 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm4b6\" (UniqueName: \"kubernetes.io/projected/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-kube-api-access-jm4b6\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.504018 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.504075 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.504155 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm4b6\" (UniqueName: \"kubernetes.io/projected/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-kube-api-access-jm4b6\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.508534 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.519813 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.533095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm4b6\" (UniqueName: \"kubernetes.io/projected/e7fa93a4-687f-4a03-b825-0a77eaf1d68e-kube-api-access-jm4b6\") pod \"nova-cell1-conductor-0\" (UID: \"e7fa93a4-687f-4a03-b825-0a77eaf1d68e\") " pod="openstack/nova-cell1-conductor-0"
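
The volume entries above show the two-phase pattern for the replacement nova-cell1-conductor-0 pod: each volume first enters the desired state (VerifyControllerAttachedVolume), then the reconciler mounts whatever is desired but not yet mounted (MountVolume started, then MountVolume.SetUp succeeded), and, as seen earlier for deleted pods, unmounts whatever is mounted but no longer desired. A toy reconcile loop over two sets, with purely illustrative names and types (not the kubelet's actual data structures):

    package main

    import "fmt"

    // reconcile drives the actual set of mounted volumes toward the desired
    // set, mirroring the MountVolume / UnmountVolume pairs in the log.
    func reconcile(desired, actual map[string]bool) {
        for v := range desired {
            if !actual[v] {
                fmt.Println("MountVolume started for volume", v)
                actual[v] = true // MountVolume.SetUp succeeded
            }
        }
        for v := range actual {
            if !desired[v] {
                fmt.Println("UnmountVolume started for volume", v)
                delete(actual, v) // Volume detached
            }
        }
    }

    func main() {
        desired := map[string]bool{"config-data": true, "combined-ca-bundle": true, "kube-api-access-jm4b6": true}
        actual := map[string]bool{"kube-api-access-jtwhg": true} // left over from the deleted pod
        reconcile(desired, actual)
    }
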
Dec 05 14:18:38 crc kubenswrapper[4858]: I1205 14:18:38.594882 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.108164 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Dec 05 14:18:39 crc kubenswrapper[4858]: W1205 14:18:39.110668 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7fa93a4_687f_4a03_b825_0a77eaf1d68e.slice/crio-7fa2e9c6fbb62faabbd21e85dc0218a7a67928ec08025a53af09c45caa40b2e5 WatchSource:0}: Error finding container 7fa2e9c6fbb62faabbd21e85dc0218a7a67928ec08025a53af09c45caa40b2e5: Status 404 returned error can't find the container with id 7fa2e9c6fbb62faabbd21e85dc0218a7a67928ec08025a53af09c45caa40b2e5
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.174255 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e7fa93a4-687f-4a03-b825-0a77eaf1d68e","Type":"ContainerStarted","Data":"7fa2e9c6fbb62faabbd21e85dc0218a7a67928ec08025a53af09c45caa40b2e5"}
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.176429 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"34c521aa-4339-4571-9168-f2939e083ea5","Type":"ContainerStarted","Data":"1a50d448f2696b0741d9d818dc8c0a49dc99bd9e77e15ca14f381f84910a8c4a"}
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.177389 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.182325 4858 generic.go:334] "Generic (PLEG): container finished" podID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerID="e717e69a17a83bca67e86ff6ba4b2e2689151badca6ccbc1d867d191c2d5019d" exitCode=0
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.182496 4858 generic.go:334] "Generic (PLEG): container finished" podID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerID="dd6413a5032d14e46e23cec222f6facf2670e84ea77770b8f52b404d94438bf5" exitCode=2
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.182601 4858 generic.go:334] "Generic (PLEG): container finished" podID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerID="7d960a027dcf2d2d7c755f28bcec1ac014c2d45d6db063d1492ae53264d36286" exitCode=0
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.182719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerDied","Data":"e717e69a17a83bca67e86ff6ba4b2e2689151badca6ccbc1d867d191c2d5019d"}
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.182871 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerDied","Data":"dd6413a5032d14e46e23cec222f6facf2670e84ea77770b8f52b404d94438bf5"}
Dec 05 14:18:39 crc kubenswrapper[4858]: I1205 14:18:39.183012 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerDied","Data":"7d960a027dcf2d2d7c755f28bcec1ac014c2d45d6db063d1492ae53264d36286"}
Dec 05 14:18:40 crc kubenswrapper[4858]: I1205 14:18:40.197049 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e7fa93a4-687f-4a03-b825-0a77eaf1d68e","Type":"ContainerStarted","Data":"a68cbedb2e1a2382bbfa1f8d2e78cbcca325fbff7f86121fafdca869069cb00b"}
Dec 05 14:18:40 crc kubenswrapper[4858]: I1205 14:18:40.231689 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.231663954 podStartE2EDuration="2.231663954s" podCreationTimestamp="2025-12-05 14:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:18:40.217074611 +0000 UTC m=+1328.764672790" watchObservedRunningTime="2025-12-05 14:18:40.231663954 +0000 UTC m=+1328.779262133"
Dec 05 14:18:40 crc kubenswrapper[4858]: I1205 14:18:40.234645 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.804770074 podStartE2EDuration="3.234624743s" podCreationTimestamp="2025-12-05 14:18:37 +0000 UTC" firstStartedPulling="2025-12-05 14:18:38.034013037 +0000 UTC m=+1326.581611176" lastFinishedPulling="2025-12-05 14:18:38.463867706 +0000 UTC m=+1327.011465845" observedRunningTime="2025-12-05 14:18:39.197029674 +0000 UTC m=+1327.744627823" watchObservedRunningTime="2025-12-05 14:18:40.234624743 +0000 UTC m=+1328.782222882"
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.213698 4858 generic.go:334] "Generic (PLEG): container finished" podID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerID="be017711ff0414b862c2da31bc05c5e0466794196bccca386624eaef5d76c888" exitCode=0
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.213760 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cf938fe-5eb1-422a-8afb-96ee30e886e4","Type":"ContainerDied","Data":"be017711ff0414b862c2da31bc05c5e0466794196bccca386624eaef5d76c888"}
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.215650 4858 generic.go:334] "Generic (PLEG): container finished" podID="0c7d9d60-f3ae-405d-bac3-ef1f8323595b" containerID="5f44ddf98a2de9eda890b8712642c4bd9885b527f35cec5c53af754b7a164511" exitCode=0
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.216547 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c7d9d60-f3ae-405d-bac3-ef1f8323595b","Type":"ContainerDied","Data":"5f44ddf98a2de9eda890b8712642c4bd9885b527f35cec5c53af754b7a164511"}
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.216584 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.463908 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
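
The startup-latency entries above are plain timestamp arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (for nova-cell1-conductor-0: 14:18:40.231663954 - 14:18:38 = 2.231663954s, with the pull times left at Go's zero time because no image pull was needed), and podStartSLOduration additionally excludes the image pull window, which is why kube-state-metrics-0 shows 2.804770074 against an E2E of 3.234624743s (pull took 0.429854669s). The same subtraction in Go, using the layout these log timestamps are printed with:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps in the entries above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-12-05 14:18:38 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-12-05 14:18:40.231663954 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 2.231663954s
    }
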
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.482177 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.576572 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-combined-ca-bundle\") pod \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") "
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.576953 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbxx6\" (UniqueName: \"kubernetes.io/projected/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-kube-api-access-tbxx6\") pod \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") "
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.577015 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-config-data\") pod \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") "
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.577031 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-combined-ca-bundle\") pod \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") "
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.577170 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bsmn\" (UniqueName: \"kubernetes.io/projected/5cf938fe-5eb1-422a-8afb-96ee30e886e4-kube-api-access-8bsmn\") pod \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") "
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.577287 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cf938fe-5eb1-422a-8afb-96ee30e886e4-logs\") pod \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\" (UID: \"5cf938fe-5eb1-422a-8afb-96ee30e886e4\") "
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.577320 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-config-data\") pod \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\" (UID: \"0c7d9d60-f3ae-405d-bac3-ef1f8323595b\") "
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.579349 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cf938fe-5eb1-422a-8afb-96ee30e886e4-logs" (OuterVolumeSpecName: "logs") pod "5cf938fe-5eb1-422a-8afb-96ee30e886e4" (UID: "5cf938fe-5eb1-422a-8afb-96ee30e886e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.582064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-kube-api-access-tbxx6" (OuterVolumeSpecName: "kube-api-access-tbxx6") pod "0c7d9d60-f3ae-405d-bac3-ef1f8323595b" (UID: "0c7d9d60-f3ae-405d-bac3-ef1f8323595b"). InnerVolumeSpecName "kube-api-access-tbxx6".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.582175 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf938fe-5eb1-422a-8afb-96ee30e886e4-kube-api-access-8bsmn" (OuterVolumeSpecName: "kube-api-access-8bsmn") pod "5cf938fe-5eb1-422a-8afb-96ee30e886e4" (UID: "5cf938fe-5eb1-422a-8afb-96ee30e886e4"). InnerVolumeSpecName "kube-api-access-8bsmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.611030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-config-data" (OuterVolumeSpecName: "config-data") pod "0c7d9d60-f3ae-405d-bac3-ef1f8323595b" (UID: "0c7d9d60-f3ae-405d-bac3-ef1f8323595b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.612864 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cf938fe-5eb1-422a-8afb-96ee30e886e4" (UID: "5cf938fe-5eb1-422a-8afb-96ee30e886e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.613899 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c7d9d60-f3ae-405d-bac3-ef1f8323595b" (UID: "0c7d9d60-f3ae-405d-bac3-ef1f8323595b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.614848 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-config-data" (OuterVolumeSpecName: "config-data") pod "5cf938fe-5eb1-422a-8afb-96ee30e886e4" (UID: "5cf938fe-5eb1-422a-8afb-96ee30e886e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.679237 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.679269 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf938fe-5eb1-422a-8afb-96ee30e886e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.679280 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bsmn\" (UniqueName: \"kubernetes.io/projected/5cf938fe-5eb1-422a-8afb-96ee30e886e4-kube-api-access-8bsmn\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.679289 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.679297 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cf938fe-5eb1-422a-8afb-96ee30e886e4-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.679306 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:41 crc kubenswrapper[4858]: I1205 14:18:41.679314 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbxx6\" (UniqueName: \"kubernetes.io/projected/0c7d9d60-f3ae-405d-bac3-ef1f8323595b-kube-api-access-tbxx6\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.225532 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c7d9d60-f3ae-405d-bac3-ef1f8323595b","Type":"ContainerDied","Data":"cfe9abba50edff28f5ad4a38d1245020d9286aeca5ae256542f46e3a58ce7bbc"} Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.225603 4858 scope.go:117] "RemoveContainer" containerID="5f44ddf98a2de9eda890b8712642c4bd9885b527f35cec5c53af754b7a164511" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.225781 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.230319 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.230903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cf938fe-5eb1-422a-8afb-96ee30e886e4","Type":"ContainerDied","Data":"509126a1b71352dccc8b7179c4d4d8a6ae27af2172b68cb64d3c4c5714a5acb3"} Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.259368 4858 scope.go:117] "RemoveContainer" containerID="be017711ff0414b862c2da31bc05c5e0466794196bccca386624eaef5d76c888" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.269198 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.299039 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.308111 4858 scope.go:117] "RemoveContainer" containerID="9de29fd33a035d9150945cdd410d884821c68eed63d12e33b2473d84e84b2f09" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.315787 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.328058 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.337681 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:42 crc kubenswrapper[4858]: E1205 14:18:42.338172 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-api" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.338241 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-api" Dec 05 14:18:42 crc kubenswrapper[4858]: E1205 14:18:42.338335 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-log" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.338388 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-log" Dec 05 14:18:42 crc kubenswrapper[4858]: E1205 14:18:42.338442 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7d9d60-f3ae-405d-bac3-ef1f8323595b" containerName="nova-scheduler-scheduler" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.338499 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7d9d60-f3ae-405d-bac3-ef1f8323595b" containerName="nova-scheduler-scheduler" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.338741 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-log" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.338906 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" containerName="nova-api-api" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.339001 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7d9d60-f3ae-405d-bac3-ef1f8323595b" containerName="nova-scheduler-scheduler" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.340117 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.343870 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.346386 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.348323 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.351095 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.361590 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.389025 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.497775 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-config-data\") pod \"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.497977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.498072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.498179 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-config-data\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.498259 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdxh5\" (UniqueName: \"kubernetes.io/projected/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-kube-api-access-cdxh5\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.498361 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-logs\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.498386 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c2m8\" (UniqueName: \"kubernetes.io/projected/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-kube-api-access-4c2m8\") pod 
\"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.600115 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-config-data\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.600188 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdxh5\" (UniqueName: \"kubernetes.io/projected/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-kube-api-access-cdxh5\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.600256 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c2m8\" (UniqueName: \"kubernetes.io/projected/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-kube-api-access-4c2m8\") pod \"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.600279 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-logs\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.600348 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-config-data\") pod \"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.600382 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.600453 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.602068 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-logs\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.605505 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.605839 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-config-data\") pod 
\"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.607466 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-config-data\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.608787 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.624350 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c2m8\" (UniqueName: \"kubernetes.io/projected/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-kube-api-access-4c2m8\") pod \"nova-scheduler-0\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " pod="openstack/nova-scheduler-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.628892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdxh5\" (UniqueName: \"kubernetes.io/projected/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-kube-api-access-cdxh5\") pod \"nova-api-0\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.682530 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:18:42 crc kubenswrapper[4858]: I1205 14:18:42.712265 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:18:43 crc kubenswrapper[4858]: I1205 14:18:43.271935 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:18:43 crc kubenswrapper[4858]: W1205 14:18:43.276080 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2b4dba8_9fca_453f_97f3_ae420cb6fdcd.slice/crio-9456d6ecf55e4dc3e53ee13b838423845f98e7506f8bae78e2a867704d0438b1 WatchSource:0}: Error finding container 9456d6ecf55e4dc3e53ee13b838423845f98e7506f8bae78e2a867704d0438b1: Status 404 returned error can't find the container with id 9456d6ecf55e4dc3e53ee13b838423845f98e7506f8bae78e2a867704d0438b1 Dec 05 14:18:43 crc kubenswrapper[4858]: I1205 14:18:43.283741 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:18:43 crc kubenswrapper[4858]: I1205 14:18:43.911253 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7d9d60-f3ae-405d-bac3-ef1f8323595b" path="/var/lib/kubelet/pods/0c7d9d60-f3ae-405d-bac3-ef1f8323595b/volumes" Dec 05 14:18:43 crc kubenswrapper[4858]: I1205 14:18:43.913003 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf938fe-5eb1-422a-8afb-96ee30e886e4" path="/var/lib/kubelet/pods/5cf938fe-5eb1-422a-8afb-96ee30e886e4/volumes" Dec 05 14:18:44 crc kubenswrapper[4858]: I1205 14:18:44.249977 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97","Type":"ContainerStarted","Data":"44e4ebfdfbc76e1a65b9d59a3b205a562cd83bd457e4550a5402a7410265db85"} Dec 05 14:18:44 crc kubenswrapper[4858]: I1205 14:18:44.250295 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97","Type":"ContainerStarted","Data":"34859beba9ece1dd46114c0906e83f960bf934aed4fa098208e3b0b1d84c1059"} Dec 05 14:18:44 crc kubenswrapper[4858]: I1205 14:18:44.252016 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd","Type":"ContainerStarted","Data":"b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428"} Dec 05 14:18:44 crc kubenswrapper[4858]: I1205 14:18:44.252061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd","Type":"ContainerStarted","Data":"2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501"} Dec 05 14:18:44 crc kubenswrapper[4858]: I1205 14:18:44.252072 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd","Type":"ContainerStarted","Data":"9456d6ecf55e4dc3e53ee13b838423845f98e7506f8bae78e2a867704d0438b1"} Dec 05 14:18:44 crc kubenswrapper[4858]: I1205 14:18:44.271771 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.27175137 podStartE2EDuration="2.27175137s" podCreationTimestamp="2025-12-05 14:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:18:44.264049902 +0000 UTC m=+1332.811648041" watchObservedRunningTime="2025-12-05 14:18:44.27175137 +0000 UTC m=+1332.819349509" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.268376 4858 generic.go:334] "Generic (PLEG): 
container finished" podID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerID="2b78e69fed336bbecf56645ac69c29aff6f4e213a0f9360a75f30a78d4877cab" exitCode=0 Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.268561 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerDied","Data":"2b78e69fed336bbecf56645ac69c29aff6f4e213a0f9360a75f30a78d4877cab"} Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.454858 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.479890 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.479872922 podStartE2EDuration="3.479872922s" podCreationTimestamp="2025-12-05 14:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:18:44.285555951 +0000 UTC m=+1332.833154090" watchObservedRunningTime="2025-12-05 14:18:45.479872922 +0000 UTC m=+1334.027471051" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.571927 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-config-data\") pod \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.571984 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-sg-core-conf-yaml\") pod \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.572045 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-combined-ca-bundle\") pod \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.572077 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-scripts\") pod \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.572899 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4lbc\" (UniqueName: \"kubernetes.io/projected/3fd12e39-94e7-4a8c-9c85-1c856b627d26-kube-api-access-b4lbc\") pod \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.572959 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-run-httpd\") pod \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.573019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-log-httpd\") pod 
\"3fd12e39-94e7-4a8c-9c85-1c856b627d26\" (UID: \"3fd12e39-94e7-4a8c-9c85-1c856b627d26\") " Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.573303 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3fd12e39-94e7-4a8c-9c85-1c856b627d26" (UID: "3fd12e39-94e7-4a8c-9c85-1c856b627d26"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.573449 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3fd12e39-94e7-4a8c-9c85-1c856b627d26" (UID: "3fd12e39-94e7-4a8c-9c85-1c856b627d26"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.573767 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.573791 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3fd12e39-94e7-4a8c-9c85-1c856b627d26-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.587013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-scripts" (OuterVolumeSpecName: "scripts") pod "3fd12e39-94e7-4a8c-9c85-1c856b627d26" (UID: "3fd12e39-94e7-4a8c-9c85-1c856b627d26"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.587141 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd12e39-94e7-4a8c-9c85-1c856b627d26-kube-api-access-b4lbc" (OuterVolumeSpecName: "kube-api-access-b4lbc") pod "3fd12e39-94e7-4a8c-9c85-1c856b627d26" (UID: "3fd12e39-94e7-4a8c-9c85-1c856b627d26"). InnerVolumeSpecName "kube-api-access-b4lbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.604159 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3fd12e39-94e7-4a8c-9c85-1c856b627d26" (UID: "3fd12e39-94e7-4a8c-9c85-1c856b627d26"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.669184 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fd12e39-94e7-4a8c-9c85-1c856b627d26" (UID: "3fd12e39-94e7-4a8c-9c85-1c856b627d26"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.674917 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.674948 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.674958 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.674968 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4lbc\" (UniqueName: \"kubernetes.io/projected/3fd12e39-94e7-4a8c-9c85-1c856b627d26-kube-api-access-b4lbc\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.691143 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-config-data" (OuterVolumeSpecName: "config-data") pod "3fd12e39-94e7-4a8c-9c85-1c856b627d26" (UID: "3fd12e39-94e7-4a8c-9c85-1c856b627d26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:45 crc kubenswrapper[4858]: I1205 14:18:45.776945 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fd12e39-94e7-4a8c-9c85-1c856b627d26-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.279935 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.280708 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3fd12e39-94e7-4a8c-9c85-1c856b627d26","Type":"ContainerDied","Data":"3f79c46b7b2a8ce452abf6143c1019dc99dac0312df85d6170852124af06aee7"} Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.280758 4858 scope.go:117] "RemoveContainer" containerID="e717e69a17a83bca67e86ff6ba4b2e2689151badca6ccbc1d867d191c2d5019d" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.306417 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.319098 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.322486 4858 scope.go:117] "RemoveContainer" containerID="dd6413a5032d14e46e23cec222f6facf2670e84ea77770b8f52b404d94438bf5" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332043 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:46 crc kubenswrapper[4858]: E1205 14:18:46.332421 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="ceilometer-notification-agent" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332442 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="ceilometer-notification-agent" Dec 05 14:18:46 crc kubenswrapper[4858]: E1205 14:18:46.332470 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="ceilometer-central-agent" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332476 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="ceilometer-central-agent" Dec 05 14:18:46 crc kubenswrapper[4858]: E1205 14:18:46.332492 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="proxy-httpd" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332499 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="proxy-httpd" Dec 05 14:18:46 crc kubenswrapper[4858]: E1205 14:18:46.332510 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="sg-core" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332517 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="sg-core" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332681 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="ceilometer-notification-agent" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332699 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="proxy-httpd" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332714 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" containerName="sg-core" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.332723 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" 
containerName="ceilometer-central-agent" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.335068 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.337969 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.342478 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.342643 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.354251 4858 scope.go:117] "RemoveContainer" containerID="2b78e69fed336bbecf56645ac69c29aff6f4e213a0f9360a75f30a78d4877cab" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.355464 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.429759 4858 scope.go:117] "RemoveContainer" containerID="7d960a027dcf2d2d7c755f28bcec1ac014c2d45d6db063d1492ae53264d36286" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.498770 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-run-httpd\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.499595 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.499900 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz7s5\" (UniqueName: \"kubernetes.io/projected/fe4e798c-be06-4822-9be5-5a3636c523c7-kube-api-access-vz7s5\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.499971 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.500001 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-scripts\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.500029 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-config-data\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.500139 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-log-httpd\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.500212 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.602189 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.602276 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-run-httpd\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.602455 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.602557 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz7s5\" (UniqueName: \"kubernetes.io/projected/fe4e798c-be06-4822-9be5-5a3636c523c7-kube-api-access-vz7s5\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.602621 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.602641 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-scripts\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.602681 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-config-data\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.602732 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-log-httpd\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc 
kubenswrapper[4858]: I1205 14:18:46.602926 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-run-httpd\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.603267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-log-httpd\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.606929 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.607277 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.607395 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-scripts\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.609403 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-config-data\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.609721 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.636903 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz7s5\" (UniqueName: \"kubernetes.io/projected/fe4e798c-be06-4822-9be5-5a3636c523c7-kube-api-access-vz7s5\") pod \"ceilometer-0\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " pod="openstack/ceilometer-0" Dec 05 14:18:46 crc kubenswrapper[4858]: I1205 14:18:46.711922 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:18:47 crc kubenswrapper[4858]: I1205 14:18:47.159799 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:18:47 crc kubenswrapper[4858]: W1205 14:18:47.172521 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe4e798c_be06_4822_9be5_5a3636c523c7.slice/crio-2744f52fed133c837191daf37a10f6529540168fe1f16df4f7557d11c72bf09c WatchSource:0}: Error finding container 2744f52fed133c837191daf37a10f6529540168fe1f16df4f7557d11c72bf09c: Status 404 returned error can't find the container with id 2744f52fed133c837191daf37a10f6529540168fe1f16df4f7557d11c72bf09c Dec 05 14:18:47 crc kubenswrapper[4858]: I1205 14:18:47.290387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerStarted","Data":"2744f52fed133c837191daf37a10f6529540168fe1f16df4f7557d11c72bf09c"} Dec 05 14:18:47 crc kubenswrapper[4858]: I1205 14:18:47.563367 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 14:18:47 crc kubenswrapper[4858]: I1205 14:18:47.712907 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 14:18:47 crc kubenswrapper[4858]: I1205 14:18:47.914609 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fd12e39-94e7-4a8c-9c85-1c856b627d26" path="/var/lib/kubelet/pods/3fd12e39-94e7-4a8c-9c85-1c856b627d26/volumes" Dec 05 14:18:48 crc kubenswrapper[4858]: I1205 14:18:48.301848 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerStarted","Data":"f161ffe02f1f23db59d89464ee061e202c3edd6f50412a1da98c20af98f7ed82"} Dec 05 14:18:48 crc kubenswrapper[4858]: I1205 14:18:48.301897 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerStarted","Data":"988be034491803f17366da9134c90cc7eac1d628531991868e46e74ff1d1bbea"} Dec 05 14:18:48 crc kubenswrapper[4858]: I1205 14:18:48.636413 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 05 14:18:49 crc kubenswrapper[4858]: I1205 14:18:49.311179 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerStarted","Data":"6ee0e3490476e07937e90fe20ab615bdc63092e35b88b23e4d9c2a03377fe5e3"} Dec 05 14:18:52 crc kubenswrapper[4858]: I1205 14:18:52.380092 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerStarted","Data":"39171bb805a12e1115a6238679de5a25d82506888d0a79c50e668969367bb87a"} Dec 05 14:18:52 crc kubenswrapper[4858]: I1205 14:18:52.380718 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 14:18:52 crc kubenswrapper[4858]: I1205 14:18:52.399870 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.515468191 podStartE2EDuration="6.399849603s" podCreationTimestamp="2025-12-05 14:18:46 +0000 UTC" firstStartedPulling="2025-12-05 14:18:47.174970102 +0000 UTC m=+1335.722568241" 
lastFinishedPulling="2025-12-05 14:18:51.059351514 +0000 UTC m=+1339.606949653" observedRunningTime="2025-12-05 14:18:52.399151243 +0000 UTC m=+1340.946749392" watchObservedRunningTime="2025-12-05 14:18:52.399849603 +0000 UTC m=+1340.947447742" Dec 05 14:18:52 crc kubenswrapper[4858]: I1205 14:18:52.683447 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 14:18:52 crc kubenswrapper[4858]: I1205 14:18:52.683495 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 14:18:52 crc kubenswrapper[4858]: I1205 14:18:52.712490 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 14:18:52 crc kubenswrapper[4858]: I1205 14:18:52.745007 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 14:18:53 crc kubenswrapper[4858]: I1205 14:18:53.715363 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 14:18:53 crc kubenswrapper[4858]: I1205 14:18:53.767019 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:18:53 crc kubenswrapper[4858]: I1205 14:18:53.767329 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.517131 4858 generic.go:334] "Generic (PLEG): container finished" podID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerID="9806652011cbb22cd69113b15ec6979dc655a3836a4ea1879725fb3fcd2dee5a" exitCode=137 Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.517292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be58e532-6d99-4983-a5bc-f0eeabf75449","Type":"ContainerDied","Data":"9806652011cbb22cd69113b15ec6979dc655a3836a4ea1879725fb3fcd2dee5a"} Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.558082 4858 generic.go:334] "Generic (PLEG): container finished" podID="aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" containerID="f61829d7f9dfcbb3a4fdb6930f130fff9260df20125133b0154454d503a3030f" exitCode=137 Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.558127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01","Type":"ContainerDied","Data":"f61829d7f9dfcbb3a4fdb6930f130fff9260df20125133b0154454d503a3030f"} Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.558154 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01","Type":"ContainerDied","Data":"097e4549cda5009ef9fe1050d77e9641a1e7fe19fbfa5e75784c1f6d693f7e2f"} Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.558163 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097e4549cda5009ef9fe1050d77e9641a1e7fe19fbfa5e75784c1f6d693f7e2f" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.581286 4858 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.681454 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-combined-ca-bundle\") pod \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.682006 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4s5x\" (UniqueName: \"kubernetes.io/projected/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-kube-api-access-h4s5x\") pod \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.682194 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-config-data\") pod \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\" (UID: \"aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01\") " Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.689676 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-kube-api-access-h4s5x" (OuterVolumeSpecName: "kube-api-access-h4s5x") pod "aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" (UID: "aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01"). InnerVolumeSpecName "kube-api-access-h4s5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.712225 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-config-data" (OuterVolumeSpecName: "config-data") pod "aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" (UID: "aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.714702 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.747076 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" (UID: "aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.784738 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.784775 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.784795 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4s5x\" (UniqueName: \"kubernetes.io/projected/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01-kube-api-access-h4s5x\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.885580 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-config-data\") pod \"be58e532-6d99-4983-a5bc-f0eeabf75449\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.885814 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md7cw\" (UniqueName: \"kubernetes.io/projected/be58e532-6d99-4983-a5bc-f0eeabf75449-kube-api-access-md7cw\") pod \"be58e532-6d99-4983-a5bc-f0eeabf75449\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.885887 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-combined-ca-bundle\") pod \"be58e532-6d99-4983-a5bc-f0eeabf75449\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.885958 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be58e532-6d99-4983-a5bc-f0eeabf75449-logs\") pod \"be58e532-6d99-4983-a5bc-f0eeabf75449\" (UID: \"be58e532-6d99-4983-a5bc-f0eeabf75449\") " Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.886252 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be58e532-6d99-4983-a5bc-f0eeabf75449-logs" (OuterVolumeSpecName: "logs") pod "be58e532-6d99-4983-a5bc-f0eeabf75449" (UID: "be58e532-6d99-4983-a5bc-f0eeabf75449"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.887056 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be58e532-6d99-4983-a5bc-f0eeabf75449-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.889357 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be58e532-6d99-4983-a5bc-f0eeabf75449-kube-api-access-md7cw" (OuterVolumeSpecName: "kube-api-access-md7cw") pod "be58e532-6d99-4983-a5bc-f0eeabf75449" (UID: "be58e532-6d99-4983-a5bc-f0eeabf75449"). InnerVolumeSpecName "kube-api-access-md7cw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.915396 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-config-data" (OuterVolumeSpecName: "config-data") pod "be58e532-6d99-4983-a5bc-f0eeabf75449" (UID: "be58e532-6d99-4983-a5bc-f0eeabf75449"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.916524 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be58e532-6d99-4983-a5bc-f0eeabf75449" (UID: "be58e532-6d99-4983-a5bc-f0eeabf75449"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.988833 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md7cw\" (UniqueName: \"kubernetes.io/projected/be58e532-6d99-4983-a5bc-f0eeabf75449-kube-api-access-md7cw\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.989073 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:18:59 crc kubenswrapper[4858]: I1205 14:18:59.989139 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be58e532-6d99-4983-a5bc-f0eeabf75449-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.573663 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.574577 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.576069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"be58e532-6d99-4983-a5bc-f0eeabf75449","Type":"ContainerDied","Data":"465ff796a6d9d5fbd471f1c7b211af68e9e7af8c82691a52c7520d88bddf22b8"} Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.576117 4858 scope.go:117] "RemoveContainer" containerID="9806652011cbb22cd69113b15ec6979dc655a3836a4ea1879725fb3fcd2dee5a" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.613599 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.636656 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.654153 4858 scope.go:117] "RemoveContainer" containerID="79a135fe0dfce42d5ef7fa926c4e7b1c45ceb5f7b7c60040950ed8ace5183240" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.660982 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.671022 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.746681 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:19:00 crc kubenswrapper[4858]: E1205 14:19:00.747004 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.747015 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 14:19:00 crc kubenswrapper[4858]: E1205 14:19:00.747053 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerName="nova-metadata-log" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.747059 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerName="nova-metadata-log" Dec 05 14:19:00 crc kubenswrapper[4858]: E1205 14:19:00.747072 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerName="nova-metadata-metadata" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.747078 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerName="nova-metadata-metadata" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.747243 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerName="nova-metadata-metadata" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.747262 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" containerName="nova-metadata-log" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.747268 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.747885 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.750426 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.757480 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.757661 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.760347 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.774920 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.776421 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.781315 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.781940 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.782239 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848457 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvwg\" (UniqueName: \"kubernetes.io/projected/712bc575-1296-4b78-bac7-382867039068-kube-api-access-nmvwg\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848599 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhkx6\" (UniqueName: \"kubernetes.io/projected/090db93c-3100-42e0-976a-574480d21ae9-kube-api-access-fhkx6\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848655 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712bc575-1296-4b78-bac7-382867039068-logs\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848685 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 
05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848721 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-config-data\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848881 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.848973 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.949984 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.950238 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhkx6\" (UniqueName: \"kubernetes.io/projected/090db93c-3100-42e0-976a-574480d21ae9-kube-api-access-fhkx6\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.950339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712bc575-1296-4b78-bac7-382867039068-logs\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.950439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.950537 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.950626 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.950713 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-config-data\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.950786 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.950894 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.951002 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmvwg\" (UniqueName: \"kubernetes.io/projected/712bc575-1296-4b78-bac7-382867039068-kube-api-access-nmvwg\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.951972 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712bc575-1296-4b78-bac7-382867039068-logs\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.957247 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.965398 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.965569 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.966003 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.967404 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.968293 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/090db93c-3100-42e0-976a-574480d21ae9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.969786 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhkx6\" (UniqueName: \"kubernetes.io/projected/090db93c-3100-42e0-976a-574480d21ae9-kube-api-access-fhkx6\") pod \"nova-cell1-novncproxy-0\" (UID: \"090db93c-3100-42e0-976a-574480d21ae9\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.972518 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-config-data\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:00 crc kubenswrapper[4858]: I1205 14:19:00.991356 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmvwg\" (UniqueName: \"kubernetes.io/projected/712bc575-1296-4b78-bac7-382867039068-kube-api-access-nmvwg\") pod \"nova-metadata-0\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " pod="openstack/nova-metadata-0" Dec 05 14:19:01 crc kubenswrapper[4858]: I1205 14:19:01.068288 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:01 crc kubenswrapper[4858]: I1205 14:19:01.099010 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:19:01 crc kubenswrapper[4858]: W1205 14:19:01.552045 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod090db93c_3100_42e0_976a_574480d21ae9.slice/crio-29ba0c3d116ae9e0a5040513764955ff629b94b5c8b99906d08ab93dde9a6e9e WatchSource:0}: Error finding container 29ba0c3d116ae9e0a5040513764955ff629b94b5c8b99906d08ab93dde9a6e9e: Status 404 returned error can't find the container with id 29ba0c3d116ae9e0a5040513764955ff629b94b5c8b99906d08ab93dde9a6e9e Dec 05 14:19:01 crc kubenswrapper[4858]: I1205 14:19:01.559760 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 14:19:01 crc kubenswrapper[4858]: I1205 14:19:01.613733 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"090db93c-3100-42e0-976a-574480d21ae9","Type":"ContainerStarted","Data":"29ba0c3d116ae9e0a5040513764955ff629b94b5c8b99906d08ab93dde9a6e9e"} Dec 05 14:19:01 crc kubenswrapper[4858]: I1205 14:19:01.617177 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:01 crc kubenswrapper[4858]: W1205 14:19:01.627005 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod712bc575_1296_4b78_bac7_382867039068.slice/crio-7d4738742817cbd4b17a93e7674e8be9b8268f79a3fe3ab8e8a8945eee42b4ac WatchSource:0}: Error finding container 7d4738742817cbd4b17a93e7674e8be9b8268f79a3fe3ab8e8a8945eee42b4ac: Status 404 returned error can't find the container with id 7d4738742817cbd4b17a93e7674e8be9b8268f79a3fe3ab8e8a8945eee42b4ac Dec 05 14:19:01 crc kubenswrapper[4858]: I1205 14:19:01.911500 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01" path="/var/lib/kubelet/pods/aa69a43a-c77d-4c96-a03e-fc3ab2c5ca01/volumes" Dec 05 14:19:01 crc kubenswrapper[4858]: I1205 14:19:01.913081 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be58e532-6d99-4983-a5bc-f0eeabf75449" path="/var/lib/kubelet/pods/be58e532-6d99-4983-a5bc-f0eeabf75449/volumes" Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.624934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"712bc575-1296-4b78-bac7-382867039068","Type":"ContainerStarted","Data":"2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead"} Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.625258 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"712bc575-1296-4b78-bac7-382867039068","Type":"ContainerStarted","Data":"de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898"} Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.625272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"712bc575-1296-4b78-bac7-382867039068","Type":"ContainerStarted","Data":"7d4738742817cbd4b17a93e7674e8be9b8268f79a3fe3ab8e8a8945eee42b4ac"} Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.627499 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"090db93c-3100-42e0-976a-574480d21ae9","Type":"ContainerStarted","Data":"02c206d42718966af7489dc01679db0ec2a7dd9a867a1c38166d170224107938"} Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.654649 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.654631641 podStartE2EDuration="2.654631641s" podCreationTimestamp="2025-12-05 14:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:19:02.645068343 +0000 UTC m=+1351.192666492" watchObservedRunningTime="2025-12-05 14:19:02.654631641 +0000 UTC m=+1351.202229780" Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.669626 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.669602565 podStartE2EDuration="2.669602565s" podCreationTimestamp="2025-12-05 14:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:19:02.665225096 +0000 UTC m=+1351.212823255" watchObservedRunningTime="2025-12-05 14:19:02.669602565 +0000 UTC m=+1351.217200694" Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.687356 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.687987 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.690437 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 14:19:02 crc kubenswrapper[4858]: I1205 14:19:02.690482 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.635601 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.638796 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.849542 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f5fdccd57-tfmqv"] Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.858859 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.868269 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f5fdccd57-tfmqv"] Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.928183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-nb\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.928460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-svc\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.928502 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxfv\" (UniqueName: \"kubernetes.io/projected/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-kube-api-access-vhxfv\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.928608 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-config\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.928625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-swift-storage-0\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:03 crc kubenswrapper[4858]: I1205 14:19:03.928720 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-sb\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.029298 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-config\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.029628 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-swift-storage-0\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.029708 4858 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-sb\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.029779 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-nb\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.029904 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-svc\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.029930 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhxfv\" (UniqueName: \"kubernetes.io/projected/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-kube-api-access-vhxfv\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.030201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-config\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.030877 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-sb\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.031426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-swift-storage-0\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.031364 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-nb\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.032213 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-svc\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.052623 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhxfv\" (UniqueName: 
\"kubernetes.io/projected/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-kube-api-access-vhxfv\") pod \"dnsmasq-dns-f5fdccd57-tfmqv\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.183218 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:04 crc kubenswrapper[4858]: I1205 14:19:04.738986 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f5fdccd57-tfmqv"] Dec 05 14:19:04 crc kubenswrapper[4858]: W1205 14:19:04.742167 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee68dfcb_60c6_41ed_b575_4a0f01da7d50.slice/crio-e935c5447ec1429be0ee5c0ce4ae18422250ea4196729f92fd005743dae38f61 WatchSource:0}: Error finding container e935c5447ec1429be0ee5c0ce4ae18422250ea4196729f92fd005743dae38f61: Status 404 returned error can't find the container with id e935c5447ec1429be0ee5c0ce4ae18422250ea4196729f92fd005743dae38f61 Dec 05 14:19:05 crc kubenswrapper[4858]: I1205 14:19:05.657817 4858 generic.go:334] "Generic (PLEG): container finished" podID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" containerID="ae61e037bb8d43ce1fc5787f8f869f0316308ebe04e7b802ff3b757eea2c0455" exitCode=0 Dec 05 14:19:05 crc kubenswrapper[4858]: I1205 14:19:05.658483 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" event={"ID":"ee68dfcb-60c6-41ed-b575-4a0f01da7d50","Type":"ContainerDied","Data":"ae61e037bb8d43ce1fc5787f8f869f0316308ebe04e7b802ff3b757eea2c0455"} Dec 05 14:19:05 crc kubenswrapper[4858]: I1205 14:19:05.658816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" event={"ID":"ee68dfcb-60c6-41ed-b575-4a0f01da7d50","Type":"ContainerStarted","Data":"e935c5447ec1429be0ee5c0ce4ae18422250ea4196729f92fd005743dae38f61"} Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.068570 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.099417 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.100324 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.180292 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.180849 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="ceilometer-central-agent" containerID="cri-o://988be034491803f17366da9134c90cc7eac1d628531991868e46e74ff1d1bbea" gracePeriod=30 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.180918 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="sg-core" containerID="cri-o://6ee0e3490476e07937e90fe20ab615bdc63092e35b88b23e4d9c2a03377fe5e3" gracePeriod=30 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.180941 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" 
containerName="proxy-httpd" containerID="cri-o://39171bb805a12e1115a6238679de5a25d82506888d0a79c50e668969367bb87a" gracePeriod=30 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.180964 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="ceilometer-notification-agent" containerID="cri-o://f161ffe02f1f23db59d89464ee061e202c3edd6f50412a1da98c20af98f7ed82" gracePeriod=30 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.197545 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.207:3000/\": EOF" Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.324685 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.670474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" event={"ID":"ee68dfcb-60c6-41ed-b575-4a0f01da7d50","Type":"ContainerStarted","Data":"3145b7ac08465f5178ef031d2e0894b9a7cf69f9fafc96ddbe390e6089461bed"} Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.670659 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.674245 4858 generic.go:334] "Generic (PLEG): container finished" podID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerID="39171bb805a12e1115a6238679de5a25d82506888d0a79c50e668969367bb87a" exitCode=0 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.674274 4858 generic.go:334] "Generic (PLEG): container finished" podID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerID="6ee0e3490476e07937e90fe20ab615bdc63092e35b88b23e4d9c2a03377fe5e3" exitCode=2 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.674291 4858 generic.go:334] "Generic (PLEG): container finished" podID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerID="988be034491803f17366da9134c90cc7eac1d628531991868e46e74ff1d1bbea" exitCode=0 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.674302 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerDied","Data":"39171bb805a12e1115a6238679de5a25d82506888d0a79c50e668969367bb87a"} Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.674352 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerDied","Data":"6ee0e3490476e07937e90fe20ab615bdc63092e35b88b23e4d9c2a03377fe5e3"} Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.674365 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerDied","Data":"988be034491803f17366da9134c90cc7eac1d628531991868e46e74ff1d1bbea"} Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.674781 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-log" containerID="cri-o://2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501" gracePeriod=30 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.674815 4858 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-api-0" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-api" containerID="cri-o://b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428" gracePeriod=30 Dec 05 14:19:06 crc kubenswrapper[4858]: I1205 14:19:06.706299 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" podStartSLOduration=3.706278229 podStartE2EDuration="3.706278229s" podCreationTimestamp="2025-12-05 14:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:19:06.69707547 +0000 UTC m=+1355.244673619" watchObservedRunningTime="2025-12-05 14:19:06.706278229 +0000 UTC m=+1355.253876368" Dec 05 14:19:07 crc kubenswrapper[4858]: I1205 14:19:07.685156 4858 generic.go:334] "Generic (PLEG): container finished" podID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerID="2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501" exitCode=143 Dec 05 14:19:07 crc kubenswrapper[4858]: I1205 14:19:07.685228 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd","Type":"ContainerDied","Data":"2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501"} Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.285077 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.447851 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-config-data\") pod \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.448318 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdxh5\" (UniqueName: \"kubernetes.io/projected/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-kube-api-access-cdxh5\") pod \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.448371 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-combined-ca-bundle\") pod \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.448440 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-logs\") pod \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\" (UID: \"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd\") " Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.449058 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-logs" (OuterVolumeSpecName: "logs") pod "a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" (UID: "a2b4dba8-9fca-453f-97f3-ae420cb6fdcd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.449660 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.479039 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-kube-api-access-cdxh5" (OuterVolumeSpecName: "kube-api-access-cdxh5") pod "a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" (UID: "a2b4dba8-9fca-453f-97f3-ae420cb6fdcd"). InnerVolumeSpecName "kube-api-access-cdxh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.484309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-config-data" (OuterVolumeSpecName: "config-data") pod "a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" (UID: "a2b4dba8-9fca-453f-97f3-ae420cb6fdcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.492026 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" (UID: "a2b4dba8-9fca-453f-97f3-ae420cb6fdcd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.551383 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.551417 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdxh5\" (UniqueName: \"kubernetes.io/projected/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-kube-api-access-cdxh5\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.551428 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.721271 4858 generic.go:334] "Generic (PLEG): container finished" podID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerID="b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428" exitCode=0 Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.721309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd","Type":"ContainerDied","Data":"b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428"} Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.721334 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b4dba8-9fca-453f-97f3-ae420cb6fdcd","Type":"ContainerDied","Data":"9456d6ecf55e4dc3e53ee13b838423845f98e7506f8bae78e2a867704d0438b1"} Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.721349 4858 scope.go:117] "RemoveContainer" containerID="b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.721471 4858 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.757569 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.757672 4858 scope.go:117] "RemoveContainer" containerID="2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.772610 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.787100 4858 scope.go:117] "RemoveContainer" containerID="b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428" Dec 05 14:19:10 crc kubenswrapper[4858]: E1205 14:19:10.789386 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428\": container with ID starting with b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428 not found: ID does not exist" containerID="b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.789425 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428"} err="failed to get container status \"b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428\": rpc error: code = NotFound desc = could not find container \"b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428\": container with ID starting with b470d3520fc8d093fa60b3c7750e02a6324c2dee317286747df9d5aa8947d428 not found: ID does not exist" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.789449 4858 scope.go:117] "RemoveContainer" containerID="2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501" Dec 05 14:19:10 crc kubenswrapper[4858]: E1205 14:19:10.789742 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501\": container with ID starting with 2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501 not found: ID does not exist" containerID="2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.789765 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501"} err="failed to get container status \"2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501\": rpc error: code = NotFound desc = could not find container \"2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501\": container with ID starting with 2663fa950a4b28371daf6fd309597ab73926220d399e880b3e76b11a68004501 not found: ID does not exist" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.790960 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:10 crc kubenswrapper[4858]: E1205 14:19:10.791448 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-api" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.791472 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" 
containerName="nova-api-api" Dec 05 14:19:10 crc kubenswrapper[4858]: E1205 14:19:10.791492 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-log" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.791497 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-log" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.791690 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-log" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.791706 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" containerName="nova-api-api" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.793280 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.797669 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.797996 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.798268 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.802325 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.959953 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-config-data\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.960326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.960469 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.960499 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45pt\" (UniqueName: \"kubernetes.io/projected/a1f0c314-0384-474a-bc4a-33bb79b0198c-kube-api-access-t45pt\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 14:19:10.960588 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:10 crc kubenswrapper[4858]: I1205 
14:19:10.960612 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f0c314-0384-474a-bc4a-33bb79b0198c-logs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.061143 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-config-data\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.061210 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.061309 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.061341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45pt\" (UniqueName: \"kubernetes.io/projected/a1f0c314-0384-474a-bc4a-33bb79b0198c-kube-api-access-t45pt\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.061411 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.061438 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f0c314-0384-474a-bc4a-33bb79b0198c-logs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.061949 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f0c314-0384-474a-bc4a-33bb79b0198c-logs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.069877 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.074498 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-config-data\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.086437 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45pt\" (UniqueName: \"kubernetes.io/projected/a1f0c314-0384-474a-bc4a-33bb79b0198c-kube-api-access-t45pt\") pod \"nova-api-0\" (UID: 
\"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.091427 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.092630 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.095414 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.099190 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.099232 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.111271 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.289249 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.746595 4858 generic.go:334] "Generic (PLEG): container finished" podID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerID="f161ffe02f1f23db59d89464ee061e202c3edd6f50412a1da98c20af98f7ed82" exitCode=0 Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.746795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerDied","Data":"f161ffe02f1f23db59d89464ee061e202c3edd6f50412a1da98c20af98f7ed82"} Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.763913 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:11 crc kubenswrapper[4858]: W1205 14:19:11.776922 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1f0c314_0384_474a_bc4a_33bb79b0198c.slice/crio-4a5a7f0519720332cc5b20909f653929d2f790c0a90d498c6ca5346449ccbdd4 WatchSource:0}: Error finding container 4a5a7f0519720332cc5b20909f653929d2f790c0a90d498c6ca5346449ccbdd4: Status 404 returned error can't find the container with id 4a5a7f0519720332cc5b20909f653929d2f790c0a90d498c6ca5346449ccbdd4 Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.780707 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.919779 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b4dba8-9fca-453f-97f3-ae420cb6fdcd" path="/var/lib/kubelet/pods/a2b4dba8-9fca-453f-97f3-ae420cb6fdcd/volumes" Dec 05 14:19:11 crc kubenswrapper[4858]: I1205 14:19:11.971461 4858 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.050882 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-b5cvh"] Dec 05 14:19:12 crc kubenswrapper[4858]: E1205 14:19:12.051342 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="ceilometer-central-agent" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.051360 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="ceilometer-central-agent" Dec 05 14:19:12 crc kubenswrapper[4858]: E1205 14:19:12.051377 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="sg-core" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.051383 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="sg-core" Dec 05 14:19:12 crc kubenswrapper[4858]: E1205 14:19:12.051414 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="proxy-httpd" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.051420 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="proxy-httpd" Dec 05 14:19:12 crc kubenswrapper[4858]: E1205 14:19:12.051429 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="ceilometer-notification-agent" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.051435 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="ceilometer-notification-agent" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.051625 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="ceilometer-notification-agent" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.051637 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="ceilometer-central-agent" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.051653 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="sg-core" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.051670 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" containerName="proxy-httpd" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.052364 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.056272 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.056490 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.071981 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-b5cvh"] Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.086999 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-sg-core-conf-yaml\") pod \"fe4e798c-be06-4822-9be5-5a3636c523c7\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.087249 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz7s5\" (UniqueName: \"kubernetes.io/projected/fe4e798c-be06-4822-9be5-5a3636c523c7-kube-api-access-vz7s5\") pod \"fe4e798c-be06-4822-9be5-5a3636c523c7\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.087296 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-ceilometer-tls-certs\") pod \"fe4e798c-be06-4822-9be5-5a3636c523c7\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.087341 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-log-httpd\") pod \"fe4e798c-be06-4822-9be5-5a3636c523c7\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.087371 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-run-httpd\") pod \"fe4e798c-be06-4822-9be5-5a3636c523c7\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.087395 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-scripts\") pod \"fe4e798c-be06-4822-9be5-5a3636c523c7\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.087548 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-config-data\") pod \"fe4e798c-be06-4822-9be5-5a3636c523c7\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.087620 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-combined-ca-bundle\") pod \"fe4e798c-be06-4822-9be5-5a3636c523c7\" (UID: \"fe4e798c-be06-4822-9be5-5a3636c523c7\") " Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.089314 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe4e798c-be06-4822-9be5-5a3636c523c7" (UID: "fe4e798c-be06-4822-9be5-5a3636c523c7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.090262 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.091042 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe4e798c-be06-4822-9be5-5a3636c523c7" (UID: "fe4e798c-be06-4822-9be5-5a3636c523c7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.105113 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4e798c-be06-4822-9be5-5a3636c523c7-kube-api-access-vz7s5" (OuterVolumeSpecName: "kube-api-access-vz7s5") pod "fe4e798c-be06-4822-9be5-5a3636c523c7" (UID: "fe4e798c-be06-4822-9be5-5a3636c523c7"). InnerVolumeSpecName "kube-api-access-vz7s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.110002 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.110006 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.119752 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-scripts" (OuterVolumeSpecName: "scripts") pod "fe4e798c-be06-4822-9be5-5a3636c523c7" (UID: "fe4e798c-be06-4822-9be5-5a3636c523c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.135753 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe4e798c-be06-4822-9be5-5a3636c523c7" (UID: "fe4e798c-be06-4822-9be5-5a3636c523c7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.165219 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fe4e798c-be06-4822-9be5-5a3636c523c7" (UID: "fe4e798c-be06-4822-9be5-5a3636c523c7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.195708 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-scripts\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.195785 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.195816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/4253ece2-408f-490c-82ef-56b8ae47aa21-kube-api-access-rlxcf\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.195958 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-config-data\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.196196 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.196215 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz7s5\" (UniqueName: \"kubernetes.io/projected/fe4e798c-be06-4822-9be5-5a3636c523c7-kube-api-access-vz7s5\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.196227 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.196237 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe4e798c-be06-4822-9be5-5a3636c523c7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.196245 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.220343 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe4e798c-be06-4822-9be5-5a3636c523c7" (UID: "fe4e798c-be06-4822-9be5-5a3636c523c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.243580 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-config-data" (OuterVolumeSpecName: "config-data") pod "fe4e798c-be06-4822-9be5-5a3636c523c7" (UID: "fe4e798c-be06-4822-9be5-5a3636c523c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.297698 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.297756 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/4253ece2-408f-490c-82ef-56b8ae47aa21-kube-api-access-rlxcf\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.297794 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-config-data\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.297962 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-scripts\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.298010 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.298020 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4e798c-be06-4822-9be5-5a3636c523c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.302931 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-config-data\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.303590 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-scripts\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.307366 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.317861 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/4253ece2-408f-490c-82ef-56b8ae47aa21-kube-api-access-rlxcf\") pod \"nova-cell1-cell-mapping-b5cvh\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.380624 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.797299 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe4e798c-be06-4822-9be5-5a3636c523c7","Type":"ContainerDied","Data":"2744f52fed133c837191daf37a10f6529540168fe1f16df4f7557d11c72bf09c"} Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.797951 4858 scope.go:117] "RemoveContainer" containerID="39171bb805a12e1115a6238679de5a25d82506888d0a79c50e668969367bb87a" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.798180 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.815528 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1f0c314-0384-474a-bc4a-33bb79b0198c","Type":"ContainerStarted","Data":"1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d"} Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.815566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1f0c314-0384-474a-bc4a-33bb79b0198c","Type":"ContainerStarted","Data":"6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964"} Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.815581 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1f0c314-0384-474a-bc4a-33bb79b0198c","Type":"ContainerStarted","Data":"4a5a7f0519720332cc5b20909f653929d2f790c0a90d498c6ca5346449ccbdd4"} Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.846713 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.846691329 podStartE2EDuration="2.846691329s" podCreationTimestamp="2025-12-05 14:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:19:12.832515148 +0000 UTC m=+1361.380113297" watchObservedRunningTime="2025-12-05 14:19:12.846691329 +0000 UTC m=+1361.394289468" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.861556 4858 scope.go:117] "RemoveContainer" containerID="6ee0e3490476e07937e90fe20ab615bdc63092e35b88b23e4d9c2a03377fe5e3" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.872723 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.881251 4858 scope.go:117] "RemoveContainer" containerID="f161ffe02f1f23db59d89464ee061e202c3edd6f50412a1da98c20af98f7ed82" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.883603 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 
14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.901802 4858 scope.go:117] "RemoveContainer" containerID="988be034491803f17366da9134c90cc7eac1d628531991868e46e74ff1d1bbea" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.935983 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.938859 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.947463 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.947673 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.947805 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.953095 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:19:12 crc kubenswrapper[4858]: I1205 14:19:12.977909 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-b5cvh"] Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.016217 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.016434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.016519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.016700 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.016778 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlp7\" (UniqueName: \"kubernetes.io/projected/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-kube-api-access-7nlp7\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.017134 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " 
pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.017313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-config-data\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.017434 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-scripts\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.119477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-scripts\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.119545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.119574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.120420 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.120454 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.120476 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlp7\" (UniqueName: \"kubernetes.io/projected/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-kube-api-access-7nlp7\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.120524 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.120574 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-config-data\") pod \"ceilometer-0\" (UID: 
\"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.124254 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-config-data\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.126948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-log-httpd\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.127259 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-run-httpd\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.128963 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-scripts\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.131354 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.132174 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.133870 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.141276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlp7\" (UniqueName: \"kubernetes.io/projected/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-kube-api-access-7nlp7\") pod \"ceilometer-0\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.290270 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.780740 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 14:19:13 crc kubenswrapper[4858]: W1205 14:19:13.782414 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdb5a7f0_22c2_43a9_86f2_0c70c966c6ba.slice/crio-ad16ddbdf490d16d5c7e577b39bbbbf9ed69f50c4ba02592faab7bfab7d89859 WatchSource:0}: Error finding container ad16ddbdf490d16d5c7e577b39bbbbf9ed69f50c4ba02592faab7bfab7d89859: Status 404 returned error can't find the container with id ad16ddbdf490d16d5c7e577b39bbbbf9ed69f50c4ba02592faab7bfab7d89859 Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.835672 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b5cvh" event={"ID":"4253ece2-408f-490c-82ef-56b8ae47aa21","Type":"ContainerStarted","Data":"7412c6ab7b334406184aa1471effa37d196ed5504abd0e29f1cf2dfedc69628f"} Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.835724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b5cvh" event={"ID":"4253ece2-408f-490c-82ef-56b8ae47aa21","Type":"ContainerStarted","Data":"4ea5912752f737845490a98cb97256981feedf6d07234cbdcb5330eef340bb7a"} Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.855868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerStarted","Data":"ad16ddbdf490d16d5c7e577b39bbbbf9ed69f50c4ba02592faab7bfab7d89859"} Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.858847 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-b5cvh" podStartSLOduration=2.858810422 podStartE2EDuration="2.858810422s" podCreationTimestamp="2025-12-05 14:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:19:13.853772826 +0000 UTC m=+1362.401370965" watchObservedRunningTime="2025-12-05 14:19:13.858810422 +0000 UTC m=+1362.406408561" Dec 05 14:19:13 crc kubenswrapper[4858]: I1205 14:19:13.914741 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe4e798c-be06-4822-9be5-5a3636c523c7" path="/var/lib/kubelet/pods/fe4e798c-be06-4822-9be5-5a3636c523c7/volumes" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.185865 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.253766 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbf48cbcc-jszgr"] Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.254037 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" podUID="a6049824-ca90-4452-988c-19c7fa7117f9" containerName="dnsmasq-dns" containerID="cri-o://16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181" gracePeriod=10 Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.733078 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.864446 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-sb\") pod \"a6049824-ca90-4452-988c-19c7fa7117f9\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.864493 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-svc\") pod \"a6049824-ca90-4452-988c-19c7fa7117f9\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.864522 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-nb\") pod \"a6049824-ca90-4452-988c-19c7fa7117f9\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.864561 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z49n\" (UniqueName: \"kubernetes.io/projected/a6049824-ca90-4452-988c-19c7fa7117f9-kube-api-access-6z49n\") pod \"a6049824-ca90-4452-988c-19c7fa7117f9\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.864670 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-config\") pod \"a6049824-ca90-4452-988c-19c7fa7117f9\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.864724 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-swift-storage-0\") pod \"a6049824-ca90-4452-988c-19c7fa7117f9\" (UID: \"a6049824-ca90-4452-988c-19c7fa7117f9\") " Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.869228 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6049824-ca90-4452-988c-19c7fa7117f9-kube-api-access-6z49n" (OuterVolumeSpecName: "kube-api-access-6z49n") pod "a6049824-ca90-4452-988c-19c7fa7117f9" (UID: "a6049824-ca90-4452-988c-19c7fa7117f9"). InnerVolumeSpecName "kube-api-access-6z49n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.884089 4858 generic.go:334] "Generic (PLEG): container finished" podID="a6049824-ca90-4452-988c-19c7fa7117f9" containerID="16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181" exitCode=0 Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.884178 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" event={"ID":"a6049824-ca90-4452-988c-19c7fa7117f9","Type":"ContainerDied","Data":"16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181"} Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.884206 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" event={"ID":"a6049824-ca90-4452-988c-19c7fa7117f9","Type":"ContainerDied","Data":"a00221adc11bea83fd242b60f5735d5f45a84746c7d86d303aaba1a79988dc18"} Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.884222 4858 scope.go:117] "RemoveContainer" containerID="16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.884351 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbf48cbcc-jszgr" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.908291 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerStarted","Data":"8eadbbd2abb1905af14eb90a333add42d0e3bd86326e6a86fbf70df4b23b02d3"} Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.908337 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerStarted","Data":"6bee8fb279de218cea32c6d04cd6b0cb46d74c41e4453011ed87d6b58ee12166"} Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.947175 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a6049824-ca90-4452-988c-19c7fa7117f9" (UID: "a6049824-ca90-4452-988c-19c7fa7117f9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.954257 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6049824-ca90-4452-988c-19c7fa7117f9" (UID: "a6049824-ca90-4452-988c-19c7fa7117f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.961415 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6049824-ca90-4452-988c-19c7fa7117f9" (UID: "a6049824-ca90-4452-988c-19c7fa7117f9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.967216 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.967244 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.967254 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.967262 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z49n\" (UniqueName: \"kubernetes.io/projected/a6049824-ca90-4452-988c-19c7fa7117f9-kube-api-access-6z49n\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.972903 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-config" (OuterVolumeSpecName: "config") pod "a6049824-ca90-4452-988c-19c7fa7117f9" (UID: "a6049824-ca90-4452-988c-19c7fa7117f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:19:14 crc kubenswrapper[4858]: I1205 14:19:14.973260 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6049824-ca90-4452-988c-19c7fa7117f9" (UID: "a6049824-ca90-4452-988c-19c7fa7117f9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.069236 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.069271 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6049824-ca90-4452-988c-19c7fa7117f9-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.108799 4858 scope.go:117] "RemoveContainer" containerID="ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.129223 4858 scope.go:117] "RemoveContainer" containerID="16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181" Dec 05 14:19:15 crc kubenswrapper[4858]: E1205 14:19:15.130264 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181\": container with ID starting with 16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181 not found: ID does not exist" containerID="16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.130321 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181"} err="failed to get container status \"16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181\": rpc error: code = NotFound desc = could not find container \"16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181\": container with ID starting with 16ac0f0f3d597f74928e16d157e8bd6d10c2bd876716ca206b13d1015eff0181 not found: ID does not exist" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.130347 4858 scope.go:117] "RemoveContainer" containerID="ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c" Dec 05 14:19:15 crc kubenswrapper[4858]: E1205 14:19:15.130692 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c\": container with ID starting with ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c not found: ID does not exist" containerID="ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.130718 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c"} err="failed to get container status \"ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c\": rpc error: code = NotFound desc = could not find container \"ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c\": container with ID starting with ccde73b6172a7440c2bee874a808b09b428f4acbb2a90ca18cc8317c333ef01c not found: ID does not exist" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.215251 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbf48cbcc-jszgr"] Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.223720 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5fbf48cbcc-jszgr"] Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.937607 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6049824-ca90-4452-988c-19c7fa7117f9" path="/var/lib/kubelet/pods/a6049824-ca90-4452-988c-19c7fa7117f9/volumes" Dec 05 14:19:15 crc kubenswrapper[4858]: I1205 14:19:15.940081 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerStarted","Data":"f751d4ff62041ede6966fcbc96230a1c1b6829556d737a8510353c8f90e3f866"} Dec 05 14:19:16 crc kubenswrapper[4858]: I1205 14:19:16.952035 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerStarted","Data":"bbef305c73922336c39bc4a6af66b38c55611fca825f65d600f338e1b67a82d5"} Dec 05 14:19:16 crc kubenswrapper[4858]: I1205 14:19:16.953421 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 14:19:16 crc kubenswrapper[4858]: I1205 14:19:16.976056 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.337240869 podStartE2EDuration="4.976024071s" podCreationTimestamp="2025-12-05 14:19:12 +0000 UTC" firstStartedPulling="2025-12-05 14:19:13.785754175 +0000 UTC m=+1362.333352314" lastFinishedPulling="2025-12-05 14:19:16.424537377 +0000 UTC m=+1364.972135516" observedRunningTime="2025-12-05 14:19:16.974381527 +0000 UTC m=+1365.521979666" watchObservedRunningTime="2025-12-05 14:19:16.976024071 +0000 UTC m=+1365.523622210" Dec 05 14:19:18 crc kubenswrapper[4858]: I1205 14:19:18.973031 4858 generic.go:334] "Generic (PLEG): container finished" podID="4253ece2-408f-490c-82ef-56b8ae47aa21" containerID="7412c6ab7b334406184aa1471effa37d196ed5504abd0e29f1cf2dfedc69628f" exitCode=0 Dec 05 14:19:18 crc kubenswrapper[4858]: I1205 14:19:18.973107 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b5cvh" event={"ID":"4253ece2-408f-490c-82ef-56b8ae47aa21","Type":"ContainerDied","Data":"7412c6ab7b334406184aa1471effa37d196ed5504abd0e29f1cf2dfedc69628f"} Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.489863 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.603937 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/4253ece2-408f-490c-82ef-56b8ae47aa21-kube-api-access-rlxcf\") pod \"4253ece2-408f-490c-82ef-56b8ae47aa21\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.604039 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-config-data\") pod \"4253ece2-408f-490c-82ef-56b8ae47aa21\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.604072 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-scripts\") pod \"4253ece2-408f-490c-82ef-56b8ae47aa21\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.604179 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-combined-ca-bundle\") pod \"4253ece2-408f-490c-82ef-56b8ae47aa21\" (UID: \"4253ece2-408f-490c-82ef-56b8ae47aa21\") " Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.612016 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-scripts" (OuterVolumeSpecName: "scripts") pod "4253ece2-408f-490c-82ef-56b8ae47aa21" (UID: "4253ece2-408f-490c-82ef-56b8ae47aa21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.612144 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4253ece2-408f-490c-82ef-56b8ae47aa21-kube-api-access-rlxcf" (OuterVolumeSpecName: "kube-api-access-rlxcf") pod "4253ece2-408f-490c-82ef-56b8ae47aa21" (UID: "4253ece2-408f-490c-82ef-56b8ae47aa21"). InnerVolumeSpecName "kube-api-access-rlxcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.637034 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-config-data" (OuterVolumeSpecName: "config-data") pod "4253ece2-408f-490c-82ef-56b8ae47aa21" (UID: "4253ece2-408f-490c-82ef-56b8ae47aa21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.643520 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4253ece2-408f-490c-82ef-56b8ae47aa21" (UID: "4253ece2-408f-490c-82ef-56b8ae47aa21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.707553 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlxcf\" (UniqueName: \"kubernetes.io/projected/4253ece2-408f-490c-82ef-56b8ae47aa21-kube-api-access-rlxcf\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.707615 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.707646 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.707656 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4253ece2-408f-490c-82ef-56b8ae47aa21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.993433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b5cvh" event={"ID":"4253ece2-408f-490c-82ef-56b8ae47aa21","Type":"ContainerDied","Data":"4ea5912752f737845490a98cb97256981feedf6d07234cbdcb5330eef340bb7a"} Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.993473 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea5912752f737845490a98cb97256981feedf6d07234cbdcb5330eef340bb7a" Dec 05 14:19:20 crc kubenswrapper[4858]: I1205 14:19:20.993525 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b5cvh" Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.104007 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.109177 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.109847 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.112170 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.112230 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.237714 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.246334 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.246593 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" containerName="nova-scheduler-scheduler" containerID="cri-o://44e4ebfdfbc76e1a65b9d59a3b205a562cd83bd457e4550a5402a7410265db85" gracePeriod=30 Dec 05 14:19:21 crc kubenswrapper[4858]: I1205 14:19:21.269921 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.009446 
4858 generic.go:334] "Generic (PLEG): container finished" podID="f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" containerID="44e4ebfdfbc76e1a65b9d59a3b205a562cd83bd457e4550a5402a7410265db85" exitCode=0 Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.009972 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-log" containerID="cri-o://6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964" gracePeriod=30 Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.010303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97","Type":"ContainerDied","Data":"44e4ebfdfbc76e1a65b9d59a3b205a562cd83bd457e4550a5402a7410265db85"} Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.011610 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-api" containerID="cri-o://1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d" gracePeriod=30 Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.018287 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.211:8774/\": EOF" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.018301 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.211:8774/\": EOF" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.022504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.367020 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.553975 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c2m8\" (UniqueName: \"kubernetes.io/projected/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-kube-api-access-4c2m8\") pod \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.554107 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-combined-ca-bundle\") pod \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.554186 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-config-data\") pod \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\" (UID: \"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97\") " Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.559730 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-kube-api-access-4c2m8" (OuterVolumeSpecName: "kube-api-access-4c2m8") pod "f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" (UID: "f474cab7-bfb5-448a-b1d1-1faa6c9d2b97"). 
InnerVolumeSpecName "kube-api-access-4c2m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.583476 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-config-data" (OuterVolumeSpecName: "config-data") pod "f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" (UID: "f474cab7-bfb5-448a-b1d1-1faa6c9d2b97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.589001 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" (UID: "f474cab7-bfb5-448a-b1d1-1faa6c9d2b97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.656966 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c2m8\" (UniqueName: \"kubernetes.io/projected/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-kube-api-access-4c2m8\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.656996 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:22 crc kubenswrapper[4858]: I1205 14:19:22.657006 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.020966 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.020967 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f474cab7-bfb5-448a-b1d1-1faa6c9d2b97","Type":"ContainerDied","Data":"34859beba9ece1dd46114c0906e83f960bf934aed4fa098208e3b0b1d84c1059"} Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.021476 4858 scope.go:117] "RemoveContainer" containerID="44e4ebfdfbc76e1a65b9d59a3b205a562cd83bd457e4550a5402a7410265db85" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.023028 4858 generic.go:334] "Generic (PLEG): container finished" podID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerID="6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964" exitCode=143 Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.023110 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1f0c314-0384-474a-bc4a-33bb79b0198c","Type":"ContainerDied","Data":"6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964"} Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.023214 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-log" containerID="cri-o://de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898" gracePeriod=30 Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.023250 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-metadata" containerID="cri-o://2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead" gracePeriod=30 Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.066766 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.090995 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.104165 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:19:23 crc kubenswrapper[4858]: E1205 14:19:23.104583 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" containerName="nova-scheduler-scheduler" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.104599 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" containerName="nova-scheduler-scheduler" Dec 05 14:19:23 crc kubenswrapper[4858]: E1205 14:19:23.104620 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6049824-ca90-4452-988c-19c7fa7117f9" containerName="init" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.104627 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6049824-ca90-4452-988c-19c7fa7117f9" containerName="init" Dec 05 14:19:23 crc kubenswrapper[4858]: E1205 14:19:23.104643 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4253ece2-408f-490c-82ef-56b8ae47aa21" containerName="nova-manage" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.104650 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4253ece2-408f-490c-82ef-56b8ae47aa21" containerName="nova-manage" Dec 05 14:19:23 crc kubenswrapper[4858]: E1205 14:19:23.104668 4858 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a6049824-ca90-4452-988c-19c7fa7117f9" containerName="dnsmasq-dns" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.104675 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6049824-ca90-4452-988c-19c7fa7117f9" containerName="dnsmasq-dns" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.104881 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" containerName="nova-scheduler-scheduler" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.104893 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6049824-ca90-4452-988c-19c7fa7117f9" containerName="dnsmasq-dns" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.104909 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4253ece2-408f-490c-82ef-56b8ae47aa21" containerName="nova-manage" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.105610 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.108321 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.119855 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.276287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b80feadb-198b-4d10-879d-6ce206658a84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.277133 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lv6\" (UniqueName: \"kubernetes.io/projected/b80feadb-198b-4d10-879d-6ce206658a84-kube-api-access-w6lv6\") pod \"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.277206 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b80feadb-198b-4d10-879d-6ce206658a84-config-data\") pod \"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.378526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b80feadb-198b-4d10-879d-6ce206658a84-config-data\") pod \"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.378625 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b80feadb-198b-4d10-879d-6ce206658a84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.378726 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lv6\" (UniqueName: \"kubernetes.io/projected/b80feadb-198b-4d10-879d-6ce206658a84-kube-api-access-w6lv6\") pod 
\"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.383658 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b80feadb-198b-4d10-879d-6ce206658a84-config-data\") pod \"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.385019 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b80feadb-198b-4d10-879d-6ce206658a84-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.403781 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lv6\" (UniqueName: \"kubernetes.io/projected/b80feadb-198b-4d10-879d-6ce206658a84-kube-api-access-w6lv6\") pod \"nova-scheduler-0\" (UID: \"b80feadb-198b-4d10-879d-6ce206658a84\") " pod="openstack/nova-scheduler-0" Dec 05 14:19:23 crc kubenswrapper[4858]: I1205 14:19:23.495570 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 14:19:24 crc kubenswrapper[4858]: I1205 14:19:23.911253 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f474cab7-bfb5-448a-b1d1-1faa6c9d2b97" path="/var/lib/kubelet/pods/f474cab7-bfb5-448a-b1d1-1faa6c9d2b97/volumes" Dec 05 14:19:24 crc kubenswrapper[4858]: W1205 14:19:24.013765 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb80feadb_198b_4d10_879d_6ce206658a84.slice/crio-4f4978db4ebbdcfa9dc5fb3188c62338fa33be49fdb1dbb52edbcfb5db6eb3cb WatchSource:0}: Error finding container 4f4978db4ebbdcfa9dc5fb3188c62338fa33be49fdb1dbb52edbcfb5db6eb3cb: Status 404 returned error can't find the container with id 4f4978db4ebbdcfa9dc5fb3188c62338fa33be49fdb1dbb52edbcfb5db6eb3cb Dec 05 14:19:24 crc kubenswrapper[4858]: I1205 14:19:24.015284 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 14:19:24 crc kubenswrapper[4858]: I1205 14:19:24.034694 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b80feadb-198b-4d10-879d-6ce206658a84","Type":"ContainerStarted","Data":"4f4978db4ebbdcfa9dc5fb3188c62338fa33be49fdb1dbb52edbcfb5db6eb3cb"} Dec 05 14:19:24 crc kubenswrapper[4858]: I1205 14:19:24.038506 4858 generic.go:334] "Generic (PLEG): container finished" podID="712bc575-1296-4b78-bac7-382867039068" containerID="de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898" exitCode=143 Dec 05 14:19:24 crc kubenswrapper[4858]: I1205 14:19:24.038569 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"712bc575-1296-4b78-bac7-382867039068","Type":"ContainerDied","Data":"de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898"} Dec 05 14:19:25 crc kubenswrapper[4858]: I1205 14:19:25.056314 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b80feadb-198b-4d10-879d-6ce206658a84","Type":"ContainerStarted","Data":"bfccb63fcbfd085cb67674fce6eb41d4391bdc8d06aec14c5343edae74fb9409"} Dec 05 14:19:25 crc kubenswrapper[4858]: I1205 14:19:25.079345 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.07931581 podStartE2EDuration="2.07931581s" podCreationTimestamp="2025-12-05 14:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:19:25.079236408 +0000 UTC m=+1373.626834547" watchObservedRunningTime="2025-12-05 14:19:25.07931581 +0000 UTC m=+1373.626913949" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.159742 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": read tcp 10.217.0.2:38864->10.217.0.209:8775: read: connection reset by peer" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.159793 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.209:8775/\": read tcp 10.217.0.2:38868->10.217.0.209:8775: read: connection reset by peer" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.665534 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.677202 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-config-data\") pod \"712bc575-1296-4b78-bac7-382867039068\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.677249 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-combined-ca-bundle\") pod \"712bc575-1296-4b78-bac7-382867039068\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.677332 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-nova-metadata-tls-certs\") pod \"712bc575-1296-4b78-bac7-382867039068\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.677366 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712bc575-1296-4b78-bac7-382867039068-logs\") pod \"712bc575-1296-4b78-bac7-382867039068\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.677527 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmvwg\" (UniqueName: \"kubernetes.io/projected/712bc575-1296-4b78-bac7-382867039068-kube-api-access-nmvwg\") pod \"712bc575-1296-4b78-bac7-382867039068\" (UID: \"712bc575-1296-4b78-bac7-382867039068\") " Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.678960 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712bc575-1296-4b78-bac7-382867039068-logs" (OuterVolumeSpecName: "logs") pod "712bc575-1296-4b78-bac7-382867039068" (UID: "712bc575-1296-4b78-bac7-382867039068"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.705094 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712bc575-1296-4b78-bac7-382867039068-kube-api-access-nmvwg" (OuterVolumeSpecName: "kube-api-access-nmvwg") pod "712bc575-1296-4b78-bac7-382867039068" (UID: "712bc575-1296-4b78-bac7-382867039068"). InnerVolumeSpecName "kube-api-access-nmvwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.730533 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "712bc575-1296-4b78-bac7-382867039068" (UID: "712bc575-1296-4b78-bac7-382867039068"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.733902 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-config-data" (OuterVolumeSpecName: "config-data") pod "712bc575-1296-4b78-bac7-382867039068" (UID: "712bc575-1296-4b78-bac7-382867039068"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.783171 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmvwg\" (UniqueName: \"kubernetes.io/projected/712bc575-1296-4b78-bac7-382867039068-kube-api-access-nmvwg\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.783287 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.783361 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.783429 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/712bc575-1296-4b78-bac7-382867039068-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.792007 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "712bc575-1296-4b78-bac7-382867039068" (UID: "712bc575-1296-4b78-bac7-382867039068"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:26 crc kubenswrapper[4858]: I1205 14:19:26.884915 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/712bc575-1296-4b78-bac7-382867039068-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.074625 4858 generic.go:334] "Generic (PLEG): container finished" podID="712bc575-1296-4b78-bac7-382867039068" containerID="2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead" exitCode=0 Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.074688 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"712bc575-1296-4b78-bac7-382867039068","Type":"ContainerDied","Data":"2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead"} Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.074716 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.074746 4858 scope.go:117] "RemoveContainer" containerID="2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.074720 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"712bc575-1296-4b78-bac7-382867039068","Type":"ContainerDied","Data":"7d4738742817cbd4b17a93e7674e8be9b8268f79a3fe3ab8e8a8945eee42b4ac"} Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.094077 4858 scope.go:117] "RemoveContainer" containerID="de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.111361 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.120547 4858 scope.go:117] "RemoveContainer" containerID="2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead" Dec 05 14:19:27 crc kubenswrapper[4858]: E1205 14:19:27.121411 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead\": container with ID starting with 2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead not found: ID does not exist" containerID="2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.121458 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead"} err="failed to get container status \"2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead\": rpc error: code = NotFound desc = could not find container \"2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead\": container with ID starting with 2992776b94eb5220ba2e14d8f9f878f5b9d2daeaf5ca6fba043a86fdfaec5ead not found: ID does not exist" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.121488 4858 scope.go:117] "RemoveContainer" containerID="de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.121784 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:27 crc kubenswrapper[4858]: E1205 14:19:27.121967 4858 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898\": container with ID starting with de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898 not found: ID does not exist" containerID="de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.121996 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898"} err="failed to get container status \"de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898\": rpc error: code = NotFound desc = could not find container \"de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898\": container with ID starting with de86864e4ed9ab8ee8732269b1b646704d3de05f3af58d3f923e99bd4fbdc898 not found: ID does not exist" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.138476 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:27 crc kubenswrapper[4858]: E1205 14:19:27.138987 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-log" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.139009 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-log" Dec 05 14:19:27 crc kubenswrapper[4858]: E1205 14:19:27.139034 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-metadata" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.139042 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-metadata" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.139262 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-log" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.139413 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="712bc575-1296-4b78-bac7-382867039068" containerName="nova-metadata-metadata" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.140625 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.146803 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.147098 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.193491 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.194632 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.194674 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88978087-6caa-487b-8425-40fc1b70ced8-logs\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.194722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-config-data\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.194752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtcc\" (UniqueName: \"kubernetes.io/projected/88978087-6caa-487b-8425-40fc1b70ced8-kube-api-access-jqtcc\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.198954 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.302025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.302349 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.302428 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88978087-6caa-487b-8425-40fc1b70ced8-logs\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " 
pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.302508 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-config-data\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.302589 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtcc\" (UniqueName: \"kubernetes.io/projected/88978087-6caa-487b-8425-40fc1b70ced8-kube-api-access-jqtcc\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.303033 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88978087-6caa-487b-8425-40fc1b70ced8-logs\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.308706 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.308747 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.311592 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88978087-6caa-487b-8425-40fc1b70ced8-config-data\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.329200 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtcc\" (UniqueName: \"kubernetes.io/projected/88978087-6caa-487b-8425-40fc1b70ced8-kube-api-access-jqtcc\") pod \"nova-metadata-0\" (UID: \"88978087-6caa-487b-8425-40fc1b70ced8\") " pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.471150 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.895205 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 14:19:27 crc kubenswrapper[4858]: W1205 14:19:27.898112 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88978087_6caa_487b_8425_40fc1b70ced8.slice/crio-49867719fa23a75c83a9c255afe7b0d8fd1603ed74997298662e24a59e95820f WatchSource:0}: Error finding container 49867719fa23a75c83a9c255afe7b0d8fd1603ed74997298662e24a59e95820f: Status 404 returned error can't find the container with id 49867719fa23a75c83a9c255afe7b0d8fd1603ed74997298662e24a59e95820f Dec 05 14:19:27 crc kubenswrapper[4858]: I1205 14:19:27.909778 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712bc575-1296-4b78-bac7-382867039068" path="/var/lib/kubelet/pods/712bc575-1296-4b78-bac7-382867039068/volumes" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.085109 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88978087-6caa-487b-8425-40fc1b70ced8","Type":"ContainerStarted","Data":"729da619a589c04c7ff5958dfb9a3eef758238e5ffdc3a32ad3d8c12f3b61061"} Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.085397 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88978087-6caa-487b-8425-40fc1b70ced8","Type":"ContainerStarted","Data":"49867719fa23a75c83a9c255afe7b0d8fd1603ed74997298662e24a59e95820f"} Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.496519 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.861228 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.928179 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-combined-ca-bundle\") pod \"a1f0c314-0384-474a-bc4a-33bb79b0198c\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.928449 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f0c314-0384-474a-bc4a-33bb79b0198c-logs\") pod \"a1f0c314-0384-474a-bc4a-33bb79b0198c\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.928609 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-config-data\") pod \"a1f0c314-0384-474a-bc4a-33bb79b0198c\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.928804 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-internal-tls-certs\") pod \"a1f0c314-0384-474a-bc4a-33bb79b0198c\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.928938 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t45pt\" (UniqueName: \"kubernetes.io/projected/a1f0c314-0384-474a-bc4a-33bb79b0198c-kube-api-access-t45pt\") pod \"a1f0c314-0384-474a-bc4a-33bb79b0198c\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.929055 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-public-tls-certs\") pod \"a1f0c314-0384-474a-bc4a-33bb79b0198c\" (UID: \"a1f0c314-0384-474a-bc4a-33bb79b0198c\") " Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.929229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1f0c314-0384-474a-bc4a-33bb79b0198c-logs" (OuterVolumeSpecName: "logs") pod "a1f0c314-0384-474a-bc4a-33bb79b0198c" (UID: "a1f0c314-0384-474a-bc4a-33bb79b0198c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.929838 4858 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f0c314-0384-474a-bc4a-33bb79b0198c-logs\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.934089 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f0c314-0384-474a-bc4a-33bb79b0198c-kube-api-access-t45pt" (OuterVolumeSpecName: "kube-api-access-t45pt") pod "a1f0c314-0384-474a-bc4a-33bb79b0198c" (UID: "a1f0c314-0384-474a-bc4a-33bb79b0198c"). InnerVolumeSpecName "kube-api-access-t45pt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.958332 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1f0c314-0384-474a-bc4a-33bb79b0198c" (UID: "a1f0c314-0384-474a-bc4a-33bb79b0198c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.968312 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-config-data" (OuterVolumeSpecName: "config-data") pod "a1f0c314-0384-474a-bc4a-33bb79b0198c" (UID: "a1f0c314-0384-474a-bc4a-33bb79b0198c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.985954 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a1f0c314-0384-474a-bc4a-33bb79b0198c" (UID: "a1f0c314-0384-474a-bc4a-33bb79b0198c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:28 crc kubenswrapper[4858]: I1205 14:19:28.998294 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a1f0c314-0384-474a-bc4a-33bb79b0198c" (UID: "a1f0c314-0384-474a-bc4a-33bb79b0198c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.031523 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.031554 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.031584 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.031594 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t45pt\" (UniqueName: \"kubernetes.io/projected/a1f0c314-0384-474a-bc4a-33bb79b0198c-kube-api-access-t45pt\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.031607 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f0c314-0384-474a-bc4a-33bb79b0198c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.097339 4858 generic.go:334] "Generic (PLEG): container finished" podID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerID="1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d" exitCode=0 Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.097383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"a1f0c314-0384-474a-bc4a-33bb79b0198c","Type":"ContainerDied","Data":"1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d"} Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.097426 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1f0c314-0384-474a-bc4a-33bb79b0198c","Type":"ContainerDied","Data":"4a5a7f0519720332cc5b20909f653929d2f790c0a90d498c6ca5346449ccbdd4"} Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.097469 4858 scope.go:117] "RemoveContainer" containerID="1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.097426 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.099634 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"88978087-6caa-487b-8425-40fc1b70ced8","Type":"ContainerStarted","Data":"98e32e4cd37774388d302a03f50f7d6df15b6ec8b14180345e146953fa48f951"} Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.129538 4858 scope.go:117] "RemoveContainer" containerID="6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.134178 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.134153044 podStartE2EDuration="2.134153044s" podCreationTimestamp="2025-12-05 14:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:19:29.125901754 +0000 UTC m=+1377.673499893" watchObservedRunningTime="2025-12-05 14:19:29.134153044 +0000 UTC m=+1377.681751183" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.153320 4858 scope.go:117] "RemoveContainer" containerID="1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.153429 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:29 crc kubenswrapper[4858]: E1205 14:19:29.153985 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d\": container with ID starting with 1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d not found: ID does not exist" containerID="1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.154014 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d"} err="failed to get container status \"1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d\": rpc error: code = NotFound desc = could not find container \"1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d\": container with ID starting with 1b7d65a7b737bb462d350c6232a2c3a7bc3397351f12d2af9a3158db0d72b46d not found: ID does not exist" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.154034 4858 scope.go:117] "RemoveContainer" containerID="6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964" Dec 05 14:19:29 crc kubenswrapper[4858]: E1205 14:19:29.154274 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964\": container with ID starting with 6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964 not found: ID does not exist" containerID="6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.154295 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964"} err="failed to get container status \"6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964\": rpc error: code = NotFound desc = could not find container \"6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964\": container with ID starting with 6dc3ef141853f02ff9091c31b3440622f1b99e99053ce0a5930be7c4978cb964 not found: ID does not exist" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.166018 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.176249 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:29 crc kubenswrapper[4858]: E1205 14:19:29.176698 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-log" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.176717 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-log" Dec 05 14:19:29 crc kubenswrapper[4858]: E1205 14:19:29.176739 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-api" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.176747 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-api" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.176946 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-api" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.176968 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" containerName="nova-api-log" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.177941 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.181022 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.181385 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.181561 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.186570 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.241816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/095593ae-70fd-489a-bfc1-47f9095b5598-logs\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.241877 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-public-tls-certs\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.241900 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4s7\" (UniqueName: \"kubernetes.io/projected/095593ae-70fd-489a-bfc1-47f9095b5598-kube-api-access-mc4s7\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.241986 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.242058 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-config-data\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.242086 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-internal-tls-certs\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.344393 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-config-data\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.344448 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.344477 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/095593ae-70fd-489a-bfc1-47f9095b5598-logs\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.344498 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-public-tls-certs\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.344517 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc4s7\" (UniqueName: \"kubernetes.io/projected/095593ae-70fd-489a-bfc1-47f9095b5598-kube-api-access-mc4s7\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.344601 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.345138 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/095593ae-70fd-489a-bfc1-47f9095b5598-logs\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.347575 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-public-tls-certs\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.347712 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-internal-tls-certs\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.348369 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.353375 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/095593ae-70fd-489a-bfc1-47f9095b5598-config-data\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.359012 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc4s7\" (UniqueName: \"kubernetes.io/projected/095593ae-70fd-489a-bfc1-47f9095b5598-kube-api-access-mc4s7\") pod \"nova-api-0\" (UID: \"095593ae-70fd-489a-bfc1-47f9095b5598\") " pod="openstack/nova-api-0" Dec 
05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.494266 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.911543 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f0c314-0384-474a-bc4a-33bb79b0198c" path="/var/lib/kubelet/pods/a1f0c314-0384-474a-bc4a-33bb79b0198c/volumes" Dec 05 14:19:29 crc kubenswrapper[4858]: I1205 14:19:29.951787 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 14:19:29 crc kubenswrapper[4858]: W1205 14:19:29.953581 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod095593ae_70fd_489a_bfc1_47f9095b5598.slice/crio-fee209f523b96c85620cc8b29c612e25e7b9e3979f46a8937649ad36408b24e2 WatchSource:0}: Error finding container fee209f523b96c85620cc8b29c612e25e7b9e3979f46a8937649ad36408b24e2: Status 404 returned error can't find the container with id fee209f523b96c85620cc8b29c612e25e7b9e3979f46a8937649ad36408b24e2 Dec 05 14:19:30 crc kubenswrapper[4858]: I1205 14:19:30.109934 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"095593ae-70fd-489a-bfc1-47f9095b5598","Type":"ContainerStarted","Data":"d2ba09fed16514b0008465845833470120def21901ad909e638d0695f54c5cb7"} Dec 05 14:19:30 crc kubenswrapper[4858]: I1205 14:19:30.110180 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"095593ae-70fd-489a-bfc1-47f9095b5598","Type":"ContainerStarted","Data":"fee209f523b96c85620cc8b29c612e25e7b9e3979f46a8937649ad36408b24e2"} Dec 05 14:19:31 crc kubenswrapper[4858]: I1205 14:19:31.122397 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"095593ae-70fd-489a-bfc1-47f9095b5598","Type":"ContainerStarted","Data":"9ba10744e3fed2b2c6be36274f11961c895bc50786665f60c0ea4d46564fc5c0"} Dec 05 14:19:31 crc kubenswrapper[4858]: I1205 14:19:31.149161 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.149140505 podStartE2EDuration="2.149140505s" podCreationTimestamp="2025-12-05 14:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:19:31.138305376 +0000 UTC m=+1379.685903555" watchObservedRunningTime="2025-12-05 14:19:31.149140505 +0000 UTC m=+1379.696738644" Dec 05 14:19:32 crc kubenswrapper[4858]: I1205 14:19:32.472367 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 14:19:32 crc kubenswrapper[4858]: I1205 14:19:32.472672 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 14:19:33 crc kubenswrapper[4858]: I1205 14:19:33.496394 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 14:19:33 crc kubenswrapper[4858]: I1205 14:19:33.522005 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 14:19:34 crc kubenswrapper[4858]: I1205 14:19:34.169871 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 14:19:37 crc kubenswrapper[4858]: I1205 14:19:37.471977 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" 
Dec 05 14:19:37 crc kubenswrapper[4858]: I1205 14:19:37.472894 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 14:19:38 crc kubenswrapper[4858]: I1205 14:19:38.484998 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="88978087-6caa-487b-8425-40fc1b70ced8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:19:38 crc kubenswrapper[4858]: I1205 14:19:38.484998 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="88978087-6caa-487b-8425-40fc1b70ced8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:19:39 crc kubenswrapper[4858]: I1205 14:19:39.495511 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 14:19:39 crc kubenswrapper[4858]: I1205 14:19:39.495602 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 14:19:40 crc kubenswrapper[4858]: I1205 14:19:40.508021 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="095593ae-70fd-489a-bfc1-47f9095b5598" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.216:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:19:40 crc kubenswrapper[4858]: I1205 14:19:40.508015 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="095593ae-70fd-489a-bfc1-47f9095b5598" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.216:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:19:43 crc kubenswrapper[4858]: I1205 14:19:43.297921 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 14:19:47 crc kubenswrapper[4858]: I1205 14:19:47.478482 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 14:19:47 crc kubenswrapper[4858]: I1205 14:19:47.483643 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 14:19:47 crc kubenswrapper[4858]: I1205 14:19:47.488467 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 14:19:48 crc kubenswrapper[4858]: I1205 14:19:48.266398 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 14:19:49 crc kubenswrapper[4858]: I1205 14:19:49.503688 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 14:19:49 crc kubenswrapper[4858]: I1205 14:19:49.504511 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 14:19:49 crc kubenswrapper[4858]: I1205 14:19:49.509672 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 14:19:49 crc kubenswrapper[4858]: I1205 14:19:49.519613 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 14:19:50 crc kubenswrapper[4858]: I1205 14:19:50.280664 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 14:19:50 crc kubenswrapper[4858]: I1205 14:19:50.379461 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 14:20:00 crc kubenswrapper[4858]: I1205 14:19:59.999703 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 14:20:01 crc kubenswrapper[4858]: I1205 14:20:01.245968 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 14:20:04 crc kubenswrapper[4858]: I1205 14:20:04.963525 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerName="rabbitmq" containerID="cri-o://ba74b79e23b66a2518665a5b2a045ada00324e6a4f063010cb7b6f8eb0d76203" gracePeriod=604796 Dec 05 14:20:06 crc kubenswrapper[4858]: I1205 14:20:06.123811 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" containerName="rabbitmq" containerID="cri-o://b4f462209706ad933d22eba13ce317a196e3b5fa6757b7b067b49668ecaac734" gracePeriod=604796 Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.396636 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.482720 4858 generic.go:334] "Generic (PLEG): container finished" podID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerID="ba74b79e23b66a2518665a5b2a045ada00324e6a4f063010cb7b6f8eb0d76203" exitCode=0 Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.482755 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d99fd616-b195-4da7-b7ac-99bed8479e36","Type":"ContainerDied","Data":"ba74b79e23b66a2518665a5b2a045ada00324e6a4f063010cb7b6f8eb0d76203"} Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.682403 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.819933 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874291 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-plugins-conf\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874333 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88vg4\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-kube-api-access-88vg4\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874378 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d99fd616-b195-4da7-b7ac-99bed8479e36-pod-info\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874394 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-config-data\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874471 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-erlang-cookie\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874517 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-plugins\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874533 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d99fd616-b195-4da7-b7ac-99bed8479e36-erlang-cookie-secret\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874583 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-tls\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 
05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874658 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-confd\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.874692 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-server-conf\") pod \"d99fd616-b195-4da7-b7ac-99bed8479e36\" (UID: \"d99fd616-b195-4da7-b7ac-99bed8479e36\") " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.876965 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.879338 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.883062 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.883352 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.886869 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d99fd616-b195-4da7-b7ac-99bed8479e36-pod-info" (OuterVolumeSpecName: "pod-info") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.888467 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.912520 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99fd616-b195-4da7-b7ac-99bed8479e36-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.937262 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-kube-api-access-88vg4" (OuterVolumeSpecName: "kube-api-access-88vg4") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "kube-api-access-88vg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.980792 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-config-data" (OuterVolumeSpecName: "config-data") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.980812 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.981118 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88vg4\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-kube-api-access-88vg4\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.981192 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d99fd616-b195-4da7-b7ac-99bed8479e36-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.981260 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.981379 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.981447 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d99fd616-b195-4da7-b7ac-99bed8479e36-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.981517 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 05 14:20:11 crc kubenswrapper[4858]: I1205 14:20:11.981570 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-tls\") on node \"crc\" DevicePath 
\"\"" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.040992 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-server-conf" (OuterVolumeSpecName: "server-conf") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.072333 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.089978 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.090194 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d99fd616-b195-4da7-b7ac-99bed8479e36-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.090250 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.196559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d99fd616-b195-4da7-b7ac-99bed8479e36" (UID: "d99fd616-b195-4da7-b7ac-99bed8479e36"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.294377 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d99fd616-b195-4da7-b7ac-99bed8479e36-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.504965 4858 generic.go:334] "Generic (PLEG): container finished" podID="96d65651-be4c-475d-b4dc-293f42b30e39" containerID="b4f462209706ad933d22eba13ce317a196e3b5fa6757b7b067b49668ecaac734" exitCode=0 Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.505077 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96d65651-be4c-475d-b4dc-293f42b30e39","Type":"ContainerDied","Data":"b4f462209706ad933d22eba13ce317a196e3b5fa6757b7b067b49668ecaac734"} Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.509046 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d99fd616-b195-4da7-b7ac-99bed8479e36","Type":"ContainerDied","Data":"753c601f6aa088a114036a0237762a6955f8124efa4fd621af187c5e304f8a18"} Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.509091 4858 scope.go:117] "RemoveContainer" containerID="ba74b79e23b66a2518665a5b2a045ada00324e6a4f063010cb7b6f8eb0d76203" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.509252 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.567281 4858 scope.go:117] "RemoveContainer" containerID="08ffecd9cc7a71d82d3e6577739e4a4afe4fee77374116ce3b8137d81627385f" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.567431 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.575545 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.632359 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 14:20:12 crc kubenswrapper[4858]: E1205 14:20:12.633329 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerName="rabbitmq" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.633475 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerName="rabbitmq" Dec 05 14:20:12 crc kubenswrapper[4858]: E1205 14:20:12.637891 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerName="setup-container" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.638133 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerName="setup-container" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.638815 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" containerName="rabbitmq" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.649918 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.654413 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.655134 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mws78" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.655320 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.656091 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.656320 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.677297 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.689160 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.689352 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.831841 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.831881 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.831915 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.831931 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.831955 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d3612569-2315-45bd-afa3-bf77d6f40952-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.831986 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.832013 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.832046 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-config-data\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.832066 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfjp6\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-kube-api-access-rfjp6\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.832118 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d3612569-2315-45bd-afa3-bf77d6f40952-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.832138 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.892980 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934254 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-config-data\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934300 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfjp6\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-kube-api-access-rfjp6\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934363 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d3612569-2315-45bd-afa3-bf77d6f40952-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934391 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934458 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934503 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934520 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934541 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d3612569-2315-45bd-afa3-bf77d6f40952-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934568 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.934604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.935169 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-config-data\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.935440 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.936792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.936971 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.938495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d3612569-2315-45bd-afa3-bf77d6f40952-server-conf\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.938744 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.941759 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.948282 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d3612569-2315-45bd-afa3-bf77d6f40952-pod-info\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.948370 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d3612569-2315-45bd-afa3-bf77d6f40952-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.951451 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:12 crc kubenswrapper[4858]: I1205 14:20:12.975470 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfjp6\" (UniqueName: \"kubernetes.io/projected/d3612569-2315-45bd-afa3-bf77d6f40952-kube-api-access-rfjp6\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.007034 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-server-0\" (UID: \"d3612569-2315-45bd-afa3-bf77d6f40952\") " pod="openstack/rabbitmq-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.036732 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96d65651-be4c-475d-b4dc-293f42b30e39-erlang-cookie-secret\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.036834 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-confd\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037219 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037278 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-tls\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037345 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-config-data\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037370 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-erlang-cookie\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037398 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-plugins-conf\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037502 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-server-conf\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037544 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzv22\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-kube-api-access-mzv22\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037580 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-plugins\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.037602 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96d65651-be4c-475d-b4dc-293f42b30e39-pod-info\") pod \"96d65651-be4c-475d-b4dc-293f42b30e39\" (UID: \"96d65651-be4c-475d-b4dc-293f42b30e39\") " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.045488 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.048168 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.054649 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.059221 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d65651-be4c-475d-b4dc-293f42b30e39-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.073307 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/96d65651-be4c-475d-b4dc-293f42b30e39-pod-info" (OuterVolumeSpecName: "pod-info") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.076199 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.077431 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.094385 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-kube-api-access-mzv22" (OuterVolumeSpecName: "kube-api-access-mzv22") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "kube-api-access-mzv22". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.136764 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-config-data" (OuterVolumeSpecName: "config-data") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140275 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140303 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140314 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140326 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140335 4858 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140343 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzv22\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-kube-api-access-mzv22\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140351 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140386 4858 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96d65651-be4c-475d-b4dc-293f42b30e39-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.140395 4858 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96d65651-be4c-475d-b4dc-293f42b30e39-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.161667 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-server-conf" (OuterVolumeSpecName: "server-conf") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.178969 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.242546 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.242580 4858 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96d65651-be4c-475d-b4dc-293f42b30e39-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.259857 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "96d65651-be4c-475d-b4dc-293f42b30e39" (UID: "96d65651-be4c-475d-b4dc-293f42b30e39"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.303560 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.345312 4858 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96d65651-be4c-475d-b4dc-293f42b30e39-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.543135 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"96d65651-be4c-475d-b4dc-293f42b30e39","Type":"ContainerDied","Data":"4af8e2d9d60a89a6f44a393e31b47ab5794adada5bb2fe67b1cae37debfb7d8f"} Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.543183 4858 scope.go:117] "RemoveContainer" containerID="b4f462209706ad933d22eba13ce317a196e3b5fa6757b7b067b49668ecaac734" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.543318 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.596321 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.618125 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.619515 4858 scope.go:117] "RemoveContainer" containerID="61be820f5d8a6be7f6e3cb724ea744ed88d63cbcb4c7adb651339c6612a8ed84" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.636635 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 14:20:13 crc kubenswrapper[4858]: E1205 14:20:13.637117 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" containerName="setup-container" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.637133 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" containerName="setup-container" Dec 05 14:20:13 crc kubenswrapper[4858]: E1205 14:20:13.637178 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" containerName="rabbitmq" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.637184 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" containerName="rabbitmq" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.637371 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" containerName="rabbitmq" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.638486 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.643183 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.643255 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.643381 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.643565 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.643923 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.644048 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.644597 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vvxs4" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.647810 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.754890 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755197 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755223 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755238 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755255 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755284 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rk649\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-kube-api-access-rk649\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755322 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755338 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755364 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755415 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.755446 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.824167 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.856773 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.856876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.856951 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 
05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.856984 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.856999 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.857015 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.857031 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.857059 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk649\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-kube-api-access-rk649\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.857096 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.857114 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.857138 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.857891 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.859087 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.859426 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.859515 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.859944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.860274 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.862741 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.862992 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.866983 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.867160 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.876285 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk649\" (UniqueName: \"kubernetes.io/projected/f62eddea-8efc-424d-bd1f-2b0b6ecd40af-kube-api-access-rk649\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.936814 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d65651-be4c-475d-b4dc-293f42b30e39" path="/var/lib/kubelet/pods/96d65651-be4c-475d-b4dc-293f42b30e39/volumes" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.938034 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d99fd616-b195-4da7-b7ac-99bed8479e36" path="/var/lib/kubelet/pods/d99fd616-b195-4da7-b7ac-99bed8479e36/volumes" Dec 05 14:20:13 crc kubenswrapper[4858]: I1205 14:20:13.940849 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f62eddea-8efc-424d-bd1f-2b0b6ecd40af\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:14 crc kubenswrapper[4858]: I1205 14:20:14.006549 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:14 crc kubenswrapper[4858]: I1205 14:20:14.471586 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 14:20:14 crc kubenswrapper[4858]: I1205 14:20:14.555464 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f62eddea-8efc-424d-bd1f-2b0b6ecd40af","Type":"ContainerStarted","Data":"39a21015c81ce40ee7611b02994aede5165e9b5b27c3d73b9641e49cb63c6257"} Dec 05 14:20:14 crc kubenswrapper[4858]: I1205 14:20:14.556531 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3612569-2315-45bd-afa3-bf77d6f40952","Type":"ContainerStarted","Data":"82be8dabf72ea5c92fd6d1e01c5f17379b5f46542318343eb2297cbfba76b444"} Dec 05 14:20:14 crc kubenswrapper[4858]: I1205 14:20:14.759733 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:20:14 crc kubenswrapper[4858]: I1205 14:20:14.759799 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.107494 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699bfd68d9-nhmld"] Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.109263 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.111349 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.128452 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699bfd68d9-nhmld"] Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.226614 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9w28\" (UniqueName: \"kubernetes.io/projected/9b6af852-6c85-41ba-a41a-af2d3b211a99-kube-api-access-k9w28\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.226930 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.226958 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-svc\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.227014 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-sb\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.227087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-config\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.227112 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-nb\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.227163 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-swift-storage-0\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.328917 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-config\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: 
\"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.328971 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-nb\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.329000 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-swift-storage-0\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.329105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9w28\" (UniqueName: \"kubernetes.io/projected/9b6af852-6c85-41ba-a41a-af2d3b211a99-kube-api-access-k9w28\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.329127 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.329148 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-svc\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.329168 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-sb\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.329928 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-swift-storage-0\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.329928 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-nb\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.330065 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-svc\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " 
pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.330235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-sb\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.330252 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-openstack-edpm-ipam\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.331090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-config\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.350507 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9w28\" (UniqueName: \"kubernetes.io/projected/9b6af852-6c85-41ba-a41a-af2d3b211a99-kube-api-access-k9w28\") pod \"dnsmasq-dns-699bfd68d9-nhmld\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.427874 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.593482 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3612569-2315-45bd-afa3-bf77d6f40952","Type":"ContainerStarted","Data":"b381af4a685bcd1e211faba57ebfdb4608b26c3579e2b79fd5cbab5961c06868"} Dec 05 14:20:16 crc kubenswrapper[4858]: I1205 14:20:16.930818 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699bfd68d9-nhmld"] Dec 05 14:20:16 crc kubenswrapper[4858]: W1205 14:20:16.937397 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b6af852_6c85_41ba_a41a_af2d3b211a99.slice/crio-b99fa82485f87e0a1e8de450e1556844295d38163ea1a86a09acee2150b86648 WatchSource:0}: Error finding container b99fa82485f87e0a1e8de450e1556844295d38163ea1a86a09acee2150b86648: Status 404 returned error can't find the container with id b99fa82485f87e0a1e8de450e1556844295d38163ea1a86a09acee2150b86648 Dec 05 14:20:17 crc kubenswrapper[4858]: I1205 14:20:17.603480 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f62eddea-8efc-424d-bd1f-2b0b6ecd40af","Type":"ContainerStarted","Data":"a58f3811f26b2f5f71a4c3cca324e0fb33a2e5cef7f90575703b5ff820dc6288"} Dec 05 14:20:17 crc kubenswrapper[4858]: I1205 14:20:17.604961 4858 generic.go:334] "Generic (PLEG): container finished" podID="9b6af852-6c85-41ba-a41a-af2d3b211a99" containerID="a08bc348315f4bbf8fc9da55a32e6939c3d6d4cd23211ea32d884c62f53d2505" exitCode=0 Dec 05 14:20:17 crc kubenswrapper[4858]: I1205 14:20:17.605098 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" 
event={"ID":"9b6af852-6c85-41ba-a41a-af2d3b211a99","Type":"ContainerDied","Data":"a08bc348315f4bbf8fc9da55a32e6939c3d6d4cd23211ea32d884c62f53d2505"} Dec 05 14:20:17 crc kubenswrapper[4858]: I1205 14:20:17.605127 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" event={"ID":"9b6af852-6c85-41ba-a41a-af2d3b211a99","Type":"ContainerStarted","Data":"b99fa82485f87e0a1e8de450e1556844295d38163ea1a86a09acee2150b86648"} Dec 05 14:20:18 crc kubenswrapper[4858]: I1205 14:20:18.615374 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" event={"ID":"9b6af852-6c85-41ba-a41a-af2d3b211a99","Type":"ContainerStarted","Data":"265bfc8d55c3a3c98b250cd9c20186fd5a711d335a2a653478ede58ae036a938"} Dec 05 14:20:18 crc kubenswrapper[4858]: I1205 14:20:18.640677 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" podStartSLOduration=2.640651214 podStartE2EDuration="2.640651214s" podCreationTimestamp="2025-12-05 14:20:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:20:18.63494066 +0000 UTC m=+1427.182538819" watchObservedRunningTime="2025-12-05 14:20:18.640651214 +0000 UTC m=+1427.188249353" Dec 05 14:20:19 crc kubenswrapper[4858]: I1205 14:20:19.622326 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:26 crc kubenswrapper[4858]: I1205 14:20:26.429095 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:26 crc kubenswrapper[4858]: I1205 14:20:26.518143 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f5fdccd57-tfmqv"] Dec 05 14:20:26 crc kubenswrapper[4858]: I1205 14:20:26.518635 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" podUID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" containerName="dnsmasq-dns" containerID="cri-o://3145b7ac08465f5178ef031d2e0894b9a7cf69f9fafc96ddbe390e6089461bed" gracePeriod=10 Dec 05 14:20:26 crc kubenswrapper[4858]: I1205 14:20:26.968280 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fd677fbc9-tcqnm"] Dec 05 14:20:26 crc kubenswrapper[4858]: I1205 14:20:26.969985 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:26 crc kubenswrapper[4858]: I1205 14:20:26.999104 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd677fbc9-tcqnm"] Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.093932 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-ovsdbserver-sb\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.093986 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-ovsdbserver-nb\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.094014 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-dns-swift-storage-0\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.094038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-openstack-edpm-ipam\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.094057 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-config\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.094117 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-dns-svc\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.094150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77cmg\" (UniqueName: \"kubernetes.io/projected/ab1678de-c74b-433f-8ebe-b164deb6a12d-kube-api-access-77cmg\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.195799 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-dns-svc\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.195874 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-77cmg\" (UniqueName: \"kubernetes.io/projected/ab1678de-c74b-433f-8ebe-b164deb6a12d-kube-api-access-77cmg\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.195964 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-ovsdbserver-sb\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.195994 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-ovsdbserver-nb\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.196025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-dns-swift-storage-0\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.196047 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-openstack-edpm-ipam\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.196065 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-config\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.196993 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-config\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.197201 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-openstack-edpm-ipam\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.197306 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-dns-svc\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.197462 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-dns-swift-storage-0\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.197501 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-ovsdbserver-nb\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.197528 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab1678de-c74b-433f-8ebe-b164deb6a12d-ovsdbserver-sb\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.219366 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77cmg\" (UniqueName: \"kubernetes.io/projected/ab1678de-c74b-433f-8ebe-b164deb6a12d-kube-api-access-77cmg\") pod \"dnsmasq-dns-fd677fbc9-tcqnm\" (UID: \"ab1678de-c74b-433f-8ebe-b164deb6a12d\") " pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.297337 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.700725 4858 generic.go:334] "Generic (PLEG): container finished" podID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" containerID="3145b7ac08465f5178ef031d2e0894b9a7cf69f9fafc96ddbe390e6089461bed" exitCode=0 Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.700762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" event={"ID":"ee68dfcb-60c6-41ed-b575-4a0f01da7d50","Type":"ContainerDied","Data":"3145b7ac08465f5178ef031d2e0894b9a7cf69f9fafc96ddbe390e6089461bed"} Dec 05 14:20:27 crc kubenswrapper[4858]: W1205 14:20:27.904544 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab1678de_c74b_433f_8ebe_b164deb6a12d.slice/crio-521b749b2d21f4bf076224f57245da718171c78326e5eaee4f017551ee2b82c6 WatchSource:0}: Error finding container 521b749b2d21f4bf076224f57245da718171c78326e5eaee4f017551ee2b82c6: Status 404 returned error can't find the container with id 521b749b2d21f4bf076224f57245da718171c78326e5eaee4f017551ee2b82c6 Dec 05 14:20:27 crc kubenswrapper[4858]: I1205 14:20:27.909436 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd677fbc9-tcqnm"] Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.070597 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.113154 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhxfv\" (UniqueName: \"kubernetes.io/projected/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-kube-api-access-vhxfv\") pod \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.113353 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-config\") pod \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.113431 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-sb\") pod \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.113504 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-svc\") pod \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.113608 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-nb\") pod \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.113864 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-swift-storage-0\") pod \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\" (UID: \"ee68dfcb-60c6-41ed-b575-4a0f01da7d50\") " Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.118743 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-kube-api-access-vhxfv" (OuterVolumeSpecName: "kube-api-access-vhxfv") pod "ee68dfcb-60c6-41ed-b575-4a0f01da7d50" (UID: "ee68dfcb-60c6-41ed-b575-4a0f01da7d50"). InnerVolumeSpecName "kube-api-access-vhxfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.217304 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhxfv\" (UniqueName: \"kubernetes.io/projected/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-kube-api-access-vhxfv\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.235925 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ee68dfcb-60c6-41ed-b575-4a0f01da7d50" (UID: "ee68dfcb-60c6-41ed-b575-4a0f01da7d50"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.256243 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee68dfcb-60c6-41ed-b575-4a0f01da7d50" (UID: "ee68dfcb-60c6-41ed-b575-4a0f01da7d50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.264312 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ee68dfcb-60c6-41ed-b575-4a0f01da7d50" (UID: "ee68dfcb-60c6-41ed-b575-4a0f01da7d50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.266716 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-config" (OuterVolumeSpecName: "config") pod "ee68dfcb-60c6-41ed-b575-4a0f01da7d50" (UID: "ee68dfcb-60c6-41ed-b575-4a0f01da7d50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.268675 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ee68dfcb-60c6-41ed-b575-4a0f01da7d50" (UID: "ee68dfcb-60c6-41ed-b575-4a0f01da7d50"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.319477 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.319509 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.319520 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.319530 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.319538 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ee68dfcb-60c6-41ed-b575-4a0f01da7d50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.710480 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab1678de-c74b-433f-8ebe-b164deb6a12d" containerID="ace2e228d565be21768c72f7d4e1ba26788d5f7a7f8f4cdb24d5d58c73ae1be7" exitCode=0 Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.710558 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" 
event={"ID":"ab1678de-c74b-433f-8ebe-b164deb6a12d","Type":"ContainerDied","Data":"ace2e228d565be21768c72f7d4e1ba26788d5f7a7f8f4cdb24d5d58c73ae1be7"} Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.710590 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" event={"ID":"ab1678de-c74b-433f-8ebe-b164deb6a12d","Type":"ContainerStarted","Data":"521b749b2d21f4bf076224f57245da718171c78326e5eaee4f017551ee2b82c6"} Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.713389 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" event={"ID":"ee68dfcb-60c6-41ed-b575-4a0f01da7d50","Type":"ContainerDied","Data":"e935c5447ec1429be0ee5c0ce4ae18422250ea4196729f92fd005743dae38f61"} Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.713507 4858 scope.go:117] "RemoveContainer" containerID="3145b7ac08465f5178ef031d2e0894b9a7cf69f9fafc96ddbe390e6089461bed" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.713434 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f5fdccd57-tfmqv" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.753463 4858 scope.go:117] "RemoveContainer" containerID="ae61e037bb8d43ce1fc5787f8f869f0316308ebe04e7b802ff3b757eea2c0455" Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.770895 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f5fdccd57-tfmqv"] Dec 05 14:20:28 crc kubenswrapper[4858]: I1205 14:20:28.781106 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f5fdccd57-tfmqv"] Dec 05 14:20:29 crc kubenswrapper[4858]: I1205 14:20:29.724300 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" event={"ID":"ab1678de-c74b-433f-8ebe-b164deb6a12d","Type":"ContainerStarted","Data":"ed0dccc963392cc0043cf28c82e8e1d8402a91de85af0799378e6b680712cf36"} Dec 05 14:20:29 crc kubenswrapper[4858]: I1205 14:20:29.724639 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:29 crc kubenswrapper[4858]: I1205 14:20:29.751000 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" podStartSLOduration=3.750984249 podStartE2EDuration="3.750984249s" podCreationTimestamp="2025-12-05 14:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:20:29.745483501 +0000 UTC m=+1438.293081660" watchObservedRunningTime="2025-12-05 14:20:29.750984249 +0000 UTC m=+1438.298582388" Dec 05 14:20:29 crc kubenswrapper[4858]: I1205 14:20:29.909376 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" path="/var/lib/kubelet/pods/ee68dfcb-60c6-41ed-b575-4a0f01da7d50/volumes" Dec 05 14:20:37 crc kubenswrapper[4858]: I1205 14:20:37.299444 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fd677fbc9-tcqnm" Dec 05 14:20:37 crc kubenswrapper[4858]: I1205 14:20:37.359508 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699bfd68d9-nhmld"] Dec 05 14:20:37 crc kubenswrapper[4858]: I1205 14:20:37.359789 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" podUID="9b6af852-6c85-41ba-a41a-af2d3b211a99" 
containerName="dnsmasq-dns" containerID="cri-o://265bfc8d55c3a3c98b250cd9c20186fd5a711d335a2a653478ede58ae036a938" gracePeriod=10 Dec 05 14:20:37 crc kubenswrapper[4858]: I1205 14:20:37.801932 4858 generic.go:334] "Generic (PLEG): container finished" podID="9b6af852-6c85-41ba-a41a-af2d3b211a99" containerID="265bfc8d55c3a3c98b250cd9c20186fd5a711d335a2a653478ede58ae036a938" exitCode=0 Dec 05 14:20:37 crc kubenswrapper[4858]: I1205 14:20:37.802146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" event={"ID":"9b6af852-6c85-41ba-a41a-af2d3b211a99","Type":"ContainerDied","Data":"265bfc8d55c3a3c98b250cd9c20186fd5a711d335a2a653478ede58ae036a938"} Dec 05 14:20:37 crc kubenswrapper[4858]: I1205 14:20:37.901417 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.009695 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-nb\") pod \"9b6af852-6c85-41ba-a41a-af2d3b211a99\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.009771 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-sb\") pod \"9b6af852-6c85-41ba-a41a-af2d3b211a99\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.009803 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-swift-storage-0\") pod \"9b6af852-6c85-41ba-a41a-af2d3b211a99\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.009841 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-svc\") pod \"9b6af852-6c85-41ba-a41a-af2d3b211a99\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.009994 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-openstack-edpm-ipam\") pod \"9b6af852-6c85-41ba-a41a-af2d3b211a99\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.010027 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-config\") pod \"9b6af852-6c85-41ba-a41a-af2d3b211a99\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.010131 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9w28\" (UniqueName: \"kubernetes.io/projected/9b6af852-6c85-41ba-a41a-af2d3b211a99-kube-api-access-k9w28\") pod \"9b6af852-6c85-41ba-a41a-af2d3b211a99\" (UID: \"9b6af852-6c85-41ba-a41a-af2d3b211a99\") " Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.015931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9b6af852-6c85-41ba-a41a-af2d3b211a99-kube-api-access-k9w28" (OuterVolumeSpecName: "kube-api-access-k9w28") pod "9b6af852-6c85-41ba-a41a-af2d3b211a99" (UID: "9b6af852-6c85-41ba-a41a-af2d3b211a99"). InnerVolumeSpecName "kube-api-access-k9w28". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.081491 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9b6af852-6c85-41ba-a41a-af2d3b211a99" (UID: "9b6af852-6c85-41ba-a41a-af2d3b211a99"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.112288 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9w28\" (UniqueName: \"kubernetes.io/projected/9b6af852-6c85-41ba-a41a-af2d3b211a99-kube-api-access-k9w28\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.112320 4858 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.129443 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b6af852-6c85-41ba-a41a-af2d3b211a99" (UID: "9b6af852-6c85-41ba-a41a-af2d3b211a99"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.155548 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b6af852-6c85-41ba-a41a-af2d3b211a99" (UID: "9b6af852-6c85-41ba-a41a-af2d3b211a99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.165356 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b6af852-6c85-41ba-a41a-af2d3b211a99" (UID: "9b6af852-6c85-41ba-a41a-af2d3b211a99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.178279 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-config" (OuterVolumeSpecName: "config") pod "9b6af852-6c85-41ba-a41a-af2d3b211a99" (UID: "9b6af852-6c85-41ba-a41a-af2d3b211a99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.203387 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9b6af852-6c85-41ba-a41a-af2d3b211a99" (UID: "9b6af852-6c85-41ba-a41a-af2d3b211a99"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.214045 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.214082 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-config\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.214091 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.214100 4858 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.214109 4858 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b6af852-6c85-41ba-a41a-af2d3b211a99-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.813847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" event={"ID":"9b6af852-6c85-41ba-a41a-af2d3b211a99","Type":"ContainerDied","Data":"b99fa82485f87e0a1e8de450e1556844295d38163ea1a86a09acee2150b86648"} Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.813966 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699bfd68d9-nhmld" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.814152 4858 scope.go:117] "RemoveContainer" containerID="265bfc8d55c3a3c98b250cd9c20186fd5a711d335a2a653478ede58ae036a938" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.837940 4858 scope.go:117] "RemoveContainer" containerID="a08bc348315f4bbf8fc9da55a32e6939c3d6d4cd23211ea32d884c62f53d2505" Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.866678 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699bfd68d9-nhmld"] Dec 05 14:20:38 crc kubenswrapper[4858]: I1205 14:20:38.881317 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699bfd68d9-nhmld"] Dec 05 14:20:39 crc kubenswrapper[4858]: I1205 14:20:39.910751 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b6af852-6c85-41ba-a41a-af2d3b211a99" path="/var/lib/kubelet/pods/9b6af852-6c85-41ba-a41a-af2d3b211a99/volumes" Dec 05 14:20:44 crc kubenswrapper[4858]: I1205 14:20:44.759888 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:20:44 crc kubenswrapper[4858]: I1205 14:20:44.761068 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:20:47 crc kubenswrapper[4858]: I1205 14:20:47.907751 4858 generic.go:334] "Generic (PLEG): container finished" podID="d3612569-2315-45bd-afa3-bf77d6f40952" containerID="b381af4a685bcd1e211faba57ebfdb4608b26c3579e2b79fd5cbab5961c06868" exitCode=0 Dec 05 14:20:47 crc kubenswrapper[4858]: I1205 14:20:47.910418 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3612569-2315-45bd-afa3-bf77d6f40952","Type":"ContainerDied","Data":"b381af4a685bcd1e211faba57ebfdb4608b26c3579e2b79fd5cbab5961c06868"} Dec 05 14:20:48 crc kubenswrapper[4858]: I1205 14:20:48.919079 4858 generic.go:334] "Generic (PLEG): container finished" podID="f62eddea-8efc-424d-bd1f-2b0b6ecd40af" containerID="a58f3811f26b2f5f71a4c3cca324e0fb33a2e5cef7f90575703b5ff820dc6288" exitCode=0 Dec 05 14:20:48 crc kubenswrapper[4858]: I1205 14:20:48.919248 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f62eddea-8efc-424d-bd1f-2b0b6ecd40af","Type":"ContainerDied","Data":"a58f3811f26b2f5f71a4c3cca324e0fb33a2e5cef7f90575703b5ff820dc6288"} Dec 05 14:20:48 crc kubenswrapper[4858]: I1205 14:20:48.921725 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"d3612569-2315-45bd-afa3-bf77d6f40952","Type":"ContainerStarted","Data":"eaf8fee4a030dfd88693609410e682119e41160332f3f4054083fc5a44e4d722"} Dec 05 14:20:48 crc kubenswrapper[4858]: I1205 14:20:48.921955 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 14:20:49 crc kubenswrapper[4858]: I1205 14:20:49.007926 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.007902504 
podStartE2EDuration="37.007902504s" podCreationTimestamp="2025-12-05 14:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:20:48.999141967 +0000 UTC m=+1457.546740126" watchObservedRunningTime="2025-12-05 14:20:49.007902504 +0000 UTC m=+1457.555500643" Dec 05 14:20:49 crc kubenswrapper[4858]: I1205 14:20:49.959444 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f62eddea-8efc-424d-bd1f-2b0b6ecd40af","Type":"ContainerStarted","Data":"44effa6df8ad5a2fafea0dd71478a38c47ece0f5f74fa41fa6949722e5de6df0"} Dec 05 14:20:49 crc kubenswrapper[4858]: I1205 14:20:49.960007 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:20:49 crc kubenswrapper[4858]: I1205 14:20:49.994774 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.994753837 podStartE2EDuration="36.994753837s" podCreationTimestamp="2025-12-05 14:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:20:49.992168838 +0000 UTC m=+1458.539766987" watchObservedRunningTime="2025-12-05 14:20:49.994753837 +0000 UTC m=+1458.542351976" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.764012 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf"] Dec 05 14:20:55 crc kubenswrapper[4858]: E1205 14:20:55.764977 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6af852-6c85-41ba-a41a-af2d3b211a99" containerName="init" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.764993 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6af852-6c85-41ba-a41a-af2d3b211a99" containerName="init" Dec 05 14:20:55 crc kubenswrapper[4858]: E1205 14:20:55.765009 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" containerName="init" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.765017 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" containerName="init" Dec 05 14:20:55 crc kubenswrapper[4858]: E1205 14:20:55.765037 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" containerName="dnsmasq-dns" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.765046 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" containerName="dnsmasq-dns" Dec 05 14:20:55 crc kubenswrapper[4858]: E1205 14:20:55.765072 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b6af852-6c85-41ba-a41a-af2d3b211a99" containerName="dnsmasq-dns" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.765079 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b6af852-6c85-41ba-a41a-af2d3b211a99" containerName="dnsmasq-dns" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.765297 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b6af852-6c85-41ba-a41a-af2d3b211a99" containerName="dnsmasq-dns" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.765332 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee68dfcb-60c6-41ed-b575-4a0f01da7d50" containerName="dnsmasq-dns" Dec 05 14:20:55 crc 
kubenswrapper[4858]: I1205 14:20:55.766109 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.770723 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.770902 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.770928 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.771590 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.798419 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf"] Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.851587 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bx2h\" (UniqueName: \"kubernetes.io/projected/de8e802c-e2ed-4977-8e7d-7f13267c5e45-kube-api-access-4bx2h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.851657 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.851689 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.851854 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.952852 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bx2h\" (UniqueName: \"kubernetes.io/projected/de8e802c-e2ed-4977-8e7d-7f13267c5e45-kube-api-access-4bx2h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.953141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.953170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.953245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.971770 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:55 crc kubenswrapper[4858]: I1205 14:20:55.982433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:56 crc kubenswrapper[4858]: I1205 14:20:56.000433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:56 crc kubenswrapper[4858]: I1205 14:20:56.007382 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bx2h\" (UniqueName: \"kubernetes.io/projected/de8e802c-e2ed-4977-8e7d-7f13267c5e45-kube-api-access-4bx2h\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:56 crc kubenswrapper[4858]: I1205 14:20:56.102242 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" Dec 05 14:20:56 crc kubenswrapper[4858]: I1205 14:20:56.450957 4858 scope.go:117] "RemoveContainer" containerID="7a3e9621021bd52e7dd7b1554b8aadfcd9ad6136a7b3323b6189ac55d0c46516" Dec 05 14:20:56 crc kubenswrapper[4858]: I1205 14:20:56.816538 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf"] Dec 05 14:20:57 crc kubenswrapper[4858]: I1205 14:20:57.053057 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" event={"ID":"de8e802c-e2ed-4977-8e7d-7f13267c5e45","Type":"ContainerStarted","Data":"1a68c63de6ebd0647a0f8ee1022b51f0956248a323a9f020f12282d6e6754727"} Dec 05 14:21:03 crc kubenswrapper[4858]: I1205 14:21:03.308050 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 14:21:04 crc kubenswrapper[4858]: I1205 14:21:04.008999 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 14:21:10 crc kubenswrapper[4858]: I1205 14:21:10.188600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" event={"ID":"de8e802c-e2ed-4977-8e7d-7f13267c5e45","Type":"ContainerStarted","Data":"646c5744f764d45f0eaa03a7ae6e2a5123f5a0975ea33d6a9e092d369cb9a395"} Dec 05 14:21:10 crc kubenswrapper[4858]: I1205 14:21:10.222116 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" podStartSLOduration=2.269571771 podStartE2EDuration="15.222090092s" podCreationTimestamp="2025-12-05 14:20:55 +0000 UTC" firstStartedPulling="2025-12-05 14:20:56.823570152 +0000 UTC m=+1465.371168291" lastFinishedPulling="2025-12-05 14:21:09.776088473 +0000 UTC m=+1478.323686612" observedRunningTime="2025-12-05 14:21:10.204095187 +0000 UTC m=+1478.751693346" watchObservedRunningTime="2025-12-05 14:21:10.222090092 +0000 UTC m=+1478.769688241" Dec 05 14:21:14 crc kubenswrapper[4858]: I1205 14:21:14.760163 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:21:14 crc kubenswrapper[4858]: I1205 14:21:14.760797 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:21:14 crc kubenswrapper[4858]: I1205 14:21:14.760885 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:21:14 crc kubenswrapper[4858]: I1205 14:21:14.761776 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8424605d2464ee3ef0a69ac56cbc16766cf5b070918dfe5d9640a4a043f1721"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:21:14 crc 
kubenswrapper[4858]: I1205 14:21:14.761898 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://b8424605d2464ee3ef0a69ac56cbc16766cf5b070918dfe5d9640a4a043f1721" gracePeriod=600
Dec 05 14:21:15 crc kubenswrapper[4858]: I1205 14:21:15.236037 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"b8424605d2464ee3ef0a69ac56cbc16766cf5b070918dfe5d9640a4a043f1721"}
Dec 05 14:21:15 crc kubenswrapper[4858]: I1205 14:21:15.236099 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="b8424605d2464ee3ef0a69ac56cbc16766cf5b070918dfe5d9640a4a043f1721" exitCode=0
Dec 05 14:21:15 crc kubenswrapper[4858]: I1205 14:21:15.236344 4858 scope.go:117] "RemoveContainer" containerID="472064fae0079b1bc994525982e709b1ab2bd1dccaa9fb9d8e2cbb9dfa8c4695"
Dec 05 14:21:15 crc kubenswrapper[4858]: I1205 14:21:15.236363 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65"}
Dec 05 14:21:22 crc kubenswrapper[4858]: I1205 14:21:22.302197 4858 generic.go:334] "Generic (PLEG): container finished" podID="de8e802c-e2ed-4977-8e7d-7f13267c5e45" containerID="646c5744f764d45f0eaa03a7ae6e2a5123f5a0975ea33d6a9e092d369cb9a395" exitCode=0
Dec 05 14:21:22 crc kubenswrapper[4858]: I1205 14:21:22.302599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" event={"ID":"de8e802c-e2ed-4977-8e7d-7f13267c5e45","Type":"ContainerDied","Data":"646c5744f764d45f0eaa03a7ae6e2a5123f5a0975ea33d6a9e092d369cb9a395"}
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.725506 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf"
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.850722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-repo-setup-combined-ca-bundle\") pod \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") "
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.850854 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bx2h\" (UniqueName: \"kubernetes.io/projected/de8e802c-e2ed-4977-8e7d-7f13267c5e45-kube-api-access-4bx2h\") pod \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") "
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.850950 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-inventory\") pod \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") "
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.851001 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-ssh-key\") pod \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\" (UID: \"de8e802c-e2ed-4977-8e7d-7f13267c5e45\") "
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.855873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "de8e802c-e2ed-4977-8e7d-7f13267c5e45" (UID: "de8e802c-e2ed-4977-8e7d-7f13267c5e45"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.856500 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8e802c-e2ed-4977-8e7d-7f13267c5e45-kube-api-access-4bx2h" (OuterVolumeSpecName: "kube-api-access-4bx2h") pod "de8e802c-e2ed-4977-8e7d-7f13267c5e45" (UID: "de8e802c-e2ed-4977-8e7d-7f13267c5e45"). InnerVolumeSpecName "kube-api-access-4bx2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.878795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "de8e802c-e2ed-4977-8e7d-7f13267c5e45" (UID: "de8e802c-e2ed-4977-8e7d-7f13267c5e45"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.884617 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-inventory" (OuterVolumeSpecName: "inventory") pod "de8e802c-e2ed-4977-8e7d-7f13267c5e45" (UID: "de8e802c-e2ed-4977-8e7d-7f13267c5e45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.953246 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-inventory\") on node \"crc\" DevicePath \"\""
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.953277 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.953289 4858 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8e802c-e2ed-4977-8e7d-7f13267c5e45-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 05 14:21:23 crc kubenswrapper[4858]: I1205 14:21:23.953303 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bx2h\" (UniqueName: \"kubernetes.io/projected/de8e802c-e2ed-4977-8e7d-7f13267c5e45-kube-api-access-4bx2h\") on node \"crc\" DevicePath \"\""
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.320313 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf" event={"ID":"de8e802c-e2ed-4977-8e7d-7f13267c5e45","Type":"ContainerDied","Data":"1a68c63de6ebd0647a0f8ee1022b51f0956248a323a9f020f12282d6e6754727"}
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.320615 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a68c63de6ebd0647a0f8ee1022b51f0956248a323a9f020f12282d6e6754727"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.320417 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-dwpsf"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.402555 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"]
Dec 05 14:21:24 crc kubenswrapper[4858]: E1205 14:21:24.402966 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8e802c-e2ed-4977-8e7d-7f13267c5e45" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.402984 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8e802c-e2ed-4977-8e7d-7f13267c5e45" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.403187 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8e802c-e2ed-4977-8e7d-7f13267c5e45" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.403774 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.405695 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.405886 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.406097 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.406221 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.426442 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"]
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.461895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.461962 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.462031 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcdps\" (UniqueName: \"kubernetes.io/projected/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-kube-api-access-fcdps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.564098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.564199 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcdps\" (UniqueName: \"kubernetes.io/projected/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-kube-api-access-fcdps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.564336 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.568551 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.568728 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.584082 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcdps\" (UniqueName: \"kubernetes.io/projected/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-kube-api-access-fcdps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-crfzd\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:24 crc kubenswrapper[4858]: I1205 14:21:24.759493 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"
Dec 05 14:21:25 crc kubenswrapper[4858]: I1205 14:21:25.288154 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd"]
Dec 05 14:21:25 crc kubenswrapper[4858]: I1205 14:21:25.334617 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd" event={"ID":"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5","Type":"ContainerStarted","Data":"1ad7e1686a3bceb1f711e68f3704bf5201a81734ad28b7aee87f7d949119c04f"}
Dec 05 14:21:26 crc kubenswrapper[4858]: I1205 14:21:26.345409 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd" event={"ID":"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5","Type":"ContainerStarted","Data":"4b0310213d164630eb9c2fc35a410dfa5a1578226a943189bf999b2cad185e41"}
Dec 05 14:21:26 crc kubenswrapper[4858]: I1205 14:21:26.362715 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd" podStartSLOduration=1.9187412849999999 podStartE2EDuration="2.36269645s" podCreationTimestamp="2025-12-05 14:21:24 +0000 UTC" firstStartedPulling="2025-12-05 14:21:25.292997633 +0000 UTC m=+1493.840595772" lastFinishedPulling="2025-12-05 14:21:25.736952798 +0000 UTC m=+1494.284550937" observedRunningTime="2025-12-05 14:21:26.362016331 +0000 UTC m=+1494.909614470" watchObservedRunningTime="2025-12-05 14:21:26.36269645 +0000 UTC m=+1494.910294579"
Dec 05 14:21:29 crc kubenswrapper[4858]: I1205 14:21:29.370981 4858 generic.go:334] "Generic (PLEG): container finished" podID="0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5" containerID="4b0310213d164630eb9c2fc35a410dfa5a1578226a943189bf999b2cad185e41" exitCode=0
Dec 05 14:21:29 crc kubenswrapper[4858]: I1205 14:21:29.371064 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd" event={"ID":"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5","Type":"ContainerDied","Data":"4b0310213d164630eb9c2fc35a410dfa5a1578226a943189bf999b2cad185e41"}
event={"ID":"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5","Type":"ContainerDied","Data":"4b0310213d164630eb9c2fc35a410dfa5a1578226a943189bf999b2cad185e41"} Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.784839 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd" Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.878370 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-ssh-key\") pod \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.879177 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-inventory\") pod \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.879309 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcdps\" (UniqueName: \"kubernetes.io/projected/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-kube-api-access-fcdps\") pod \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\" (UID: \"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5\") " Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.889079 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-kube-api-access-fcdps" (OuterVolumeSpecName: "kube-api-access-fcdps") pod "0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5" (UID: "0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5"). InnerVolumeSpecName "kube-api-access-fcdps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.911100 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5" (UID: "0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.915855 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-inventory" (OuterVolumeSpecName: "inventory") pod "0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5" (UID: "0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.981735 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.981776 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcdps\" (UniqueName: \"kubernetes.io/projected/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-kube-api-access-fcdps\") on node \"crc\" DevicePath \"\"" Dec 05 14:21:30 crc kubenswrapper[4858]: I1205 14:21:30.981791 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.390769 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd" event={"ID":"0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5","Type":"ContainerDied","Data":"1ad7e1686a3bceb1f711e68f3704bf5201a81734ad28b7aee87f7d949119c04f"} Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.390818 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad7e1686a3bceb1f711e68f3704bf5201a81734ad28b7aee87f7d949119c04f" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.390903 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-crfzd" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.549283 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p"] Dec 05 14:21:31 crc kubenswrapper[4858]: E1205 14:21:31.549755 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.549774 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.550023 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb23d37-cc99-4fbb-93f3-bfa23cc0dad5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.550952 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.554270 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.554579 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.554802 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.554951 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.618218 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p"] Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.628525 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.628837 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.629104 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.629225 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljsz\" (UniqueName: \"kubernetes.io/projected/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-kube-api-access-wljsz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.730891 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.730945 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: 
\"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.731054 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.731088 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljsz\" (UniqueName: \"kubernetes.io/projected/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-kube-api-access-wljsz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.738016 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.739488 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.740455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.750977 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljsz\" (UniqueName: \"kubernetes.io/projected/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-kube-api-access-wljsz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.872241 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:21:31 crc kubenswrapper[4858]: I1205 14:21:31.881328 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:21:32 crc kubenswrapper[4858]: I1205 14:21:32.445733 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p"] Dec 05 14:21:32 crc kubenswrapper[4858]: W1205 14:21:32.448866 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0000bceb_8b33_414f_9d73_2e9b5c1edbfd.slice/crio-883201676ed81666f34837c3ca7b0abd6dede33fd0c035f3c67794d2a2ceb02b WatchSource:0}: Error finding container 883201676ed81666f34837c3ca7b0abd6dede33fd0c035f3c67794d2a2ceb02b: Status 404 returned error can't find the container with id 883201676ed81666f34837c3ca7b0abd6dede33fd0c035f3c67794d2a2ceb02b Dec 05 14:21:32 crc kubenswrapper[4858]: I1205 14:21:32.895771 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:21:33 crc kubenswrapper[4858]: I1205 14:21:33.410228 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" event={"ID":"0000bceb-8b33-414f-9d73-2e9b5c1edbfd","Type":"ContainerStarted","Data":"c830316b78c8f95f6ecf4c6d64f8d74b58afbcfc0c92c53beaa76fe02eb8ad6c"} Dec 05 14:21:33 crc kubenswrapper[4858]: I1205 14:21:33.410549 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" event={"ID":"0000bceb-8b33-414f-9d73-2e9b5c1edbfd","Type":"ContainerStarted","Data":"883201676ed81666f34837c3ca7b0abd6dede33fd0c035f3c67794d2a2ceb02b"} Dec 05 14:21:33 crc kubenswrapper[4858]: I1205 14:21:33.431380 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" podStartSLOduration=1.989793207 podStartE2EDuration="2.431362589s" podCreationTimestamp="2025-12-05 14:21:31 +0000 UTC" firstStartedPulling="2025-12-05 14:21:32.451160372 +0000 UTC m=+1500.998758511" lastFinishedPulling="2025-12-05 14:21:32.892729754 +0000 UTC m=+1501.440327893" observedRunningTime="2025-12-05 14:21:33.426253102 +0000 UTC m=+1501.973851261" watchObservedRunningTime="2025-12-05 14:21:33.431362589 +0000 UTC m=+1501.978960728" Dec 05 14:21:56 crc kubenswrapper[4858]: I1205 14:21:56.590044 4858 scope.go:117] "RemoveContainer" containerID="dbb82e89de717b88543f98ac96946accb295f41533bf00e984ef5a1cc5feaabd" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.222328 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dg5zf"] Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.225257 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.237694 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg5zf"] Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.365714 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44j62\" (UniqueName: \"kubernetes.io/projected/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-kube-api-access-44j62\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.366163 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-catalog-content\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.366330 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-utilities\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.469066 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44j62\" (UniqueName: \"kubernetes.io/projected/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-kube-api-access-44j62\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.469362 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-catalog-content\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.469445 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-utilities\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.469847 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-catalog-content\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.469952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-utilities\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.498670 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-44j62\" (UniqueName: \"kubernetes.io/projected/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-kube-api-access-44j62\") pod \"redhat-marketplace-dg5zf\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:43 crc kubenswrapper[4858]: I1205 14:22:43.546950 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:44 crc kubenswrapper[4858]: I1205 14:22:44.039038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg5zf"] Dec 05 14:22:44 crc kubenswrapper[4858]: W1205 14:22:44.046177 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdb5d37c_14a8_40a1_9f73_9fba36a3def4.slice/crio-dc00fffe936ad4b2866decda79f45b32c550d7178f46814f74d41e7d95e53ec6 WatchSource:0}: Error finding container dc00fffe936ad4b2866decda79f45b32c550d7178f46814f74d41e7d95e53ec6: Status 404 returned error can't find the container with id dc00fffe936ad4b2866decda79f45b32c550d7178f46814f74d41e7d95e53ec6 Dec 05 14:22:44 crc kubenswrapper[4858]: I1205 14:22:44.092501 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg5zf" event={"ID":"fdb5d37c-14a8-40a1-9f73-9fba36a3def4","Type":"ContainerStarted","Data":"dc00fffe936ad4b2866decda79f45b32c550d7178f46814f74d41e7d95e53ec6"} Dec 05 14:22:45 crc kubenswrapper[4858]: I1205 14:22:45.101501 4858 generic.go:334] "Generic (PLEG): container finished" podID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerID="51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723" exitCode=0 Dec 05 14:22:45 crc kubenswrapper[4858]: I1205 14:22:45.101686 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg5zf" event={"ID":"fdb5d37c-14a8-40a1-9f73-9fba36a3def4","Type":"ContainerDied","Data":"51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723"} Dec 05 14:22:45 crc kubenswrapper[4858]: I1205 14:22:45.105171 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:22:46 crc kubenswrapper[4858]: I1205 14:22:46.112562 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg5zf" event={"ID":"fdb5d37c-14a8-40a1-9f73-9fba36a3def4","Type":"ContainerStarted","Data":"e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9"} Dec 05 14:22:47 crc kubenswrapper[4858]: I1205 14:22:47.123660 4858 generic.go:334] "Generic (PLEG): container finished" podID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerID="e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9" exitCode=0 Dec 05 14:22:47 crc kubenswrapper[4858]: I1205 14:22:47.123715 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg5zf" event={"ID":"fdb5d37c-14a8-40a1-9f73-9fba36a3def4","Type":"ContainerDied","Data":"e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9"} Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.020589 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v5cbm"] Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.023652 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.034712 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v5cbm"] Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.134707 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg5zf" event={"ID":"fdb5d37c-14a8-40a1-9f73-9fba36a3def4","Type":"ContainerStarted","Data":"585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194"} Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.162124 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-utilities\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.162326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpdzj\" (UniqueName: \"kubernetes.io/projected/11ae4a94-803b-44d2-8652-648c145210c3-kube-api-access-vpdzj\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.162872 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-catalog-content\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.164551 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dg5zf" podStartSLOduration=2.66530172 podStartE2EDuration="5.164533004s" podCreationTimestamp="2025-12-05 14:22:43 +0000 UTC" firstStartedPulling="2025-12-05 14:22:45.103314005 +0000 UTC m=+1573.650912134" lastFinishedPulling="2025-12-05 14:22:47.602545269 +0000 UTC m=+1576.150143418" observedRunningTime="2025-12-05 14:22:48.156069287 +0000 UTC m=+1576.703667426" watchObservedRunningTime="2025-12-05 14:22:48.164533004 +0000 UTC m=+1576.712131153" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.264870 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-utilities\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.265166 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpdzj\" (UniqueName: \"kubernetes.io/projected/11ae4a94-803b-44d2-8652-648c145210c3-kube-api-access-vpdzj\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.265344 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-catalog-content\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " 
pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.266345 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-utilities\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.266605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-catalog-content\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.285255 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpdzj\" (UniqueName: \"kubernetes.io/projected/11ae4a94-803b-44d2-8652-648c145210c3-kube-api-access-vpdzj\") pod \"redhat-operators-v5cbm\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.344516 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:48 crc kubenswrapper[4858]: W1205 14:22:48.841184 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11ae4a94_803b_44d2_8652_648c145210c3.slice/crio-143d281762b8a128622a11ee660a21b9aa6ab317cfbdeab7c486025c38d14f65 WatchSource:0}: Error finding container 143d281762b8a128622a11ee660a21b9aa6ab317cfbdeab7c486025c38d14f65: Status 404 returned error can't find the container with id 143d281762b8a128622a11ee660a21b9aa6ab317cfbdeab7c486025c38d14f65 Dec 05 14:22:48 crc kubenswrapper[4858]: I1205 14:22:48.842046 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v5cbm"] Dec 05 14:22:49 crc kubenswrapper[4858]: I1205 14:22:49.146421 4858 generic.go:334] "Generic (PLEG): container finished" podID="11ae4a94-803b-44d2-8652-648c145210c3" containerID="4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a" exitCode=0 Dec 05 14:22:49 crc kubenswrapper[4858]: I1205 14:22:49.146495 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5cbm" event={"ID":"11ae4a94-803b-44d2-8652-648c145210c3","Type":"ContainerDied","Data":"4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a"} Dec 05 14:22:49 crc kubenswrapper[4858]: I1205 14:22:49.147083 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5cbm" event={"ID":"11ae4a94-803b-44d2-8652-648c145210c3","Type":"ContainerStarted","Data":"143d281762b8a128622a11ee660a21b9aa6ab317cfbdeab7c486025c38d14f65"} Dec 05 14:22:50 crc kubenswrapper[4858]: I1205 14:22:50.165815 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5cbm" event={"ID":"11ae4a94-803b-44d2-8652-648c145210c3","Type":"ContainerStarted","Data":"44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc"} Dec 05 14:22:53 crc kubenswrapper[4858]: I1205 14:22:53.547248 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:53 crc kubenswrapper[4858]: I1205 
14:22:53.548631 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:53 crc kubenswrapper[4858]: I1205 14:22:53.596427 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:54 crc kubenswrapper[4858]: I1205 14:22:54.243311 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:54 crc kubenswrapper[4858]: I1205 14:22:54.792167 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg5zf"] Dec 05 14:22:55 crc kubenswrapper[4858]: I1205 14:22:55.215348 4858 generic.go:334] "Generic (PLEG): container finished" podID="11ae4a94-803b-44d2-8652-648c145210c3" containerID="44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc" exitCode=0 Dec 05 14:22:55 crc kubenswrapper[4858]: I1205 14:22:55.215412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5cbm" event={"ID":"11ae4a94-803b-44d2-8652-648c145210c3","Type":"ContainerDied","Data":"44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc"} Dec 05 14:22:56 crc kubenswrapper[4858]: I1205 14:22:56.228198 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5cbm" event={"ID":"11ae4a94-803b-44d2-8652-648c145210c3","Type":"ContainerStarted","Data":"fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b"} Dec 05 14:22:56 crc kubenswrapper[4858]: I1205 14:22:56.228428 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dg5zf" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerName="registry-server" containerID="cri-o://585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194" gracePeriod=2 Dec 05 14:22:56 crc kubenswrapper[4858]: I1205 14:22:56.262557 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v5cbm" podStartSLOduration=2.769793833 podStartE2EDuration="9.262535083s" podCreationTimestamp="2025-12-05 14:22:47 +0000 UTC" firstStartedPulling="2025-12-05 14:22:49.148270606 +0000 UTC m=+1577.695868745" lastFinishedPulling="2025-12-05 14:22:55.641011856 +0000 UTC m=+1584.188609995" observedRunningTime="2025-12-05 14:22:56.25347333 +0000 UTC m=+1584.801071469" watchObservedRunningTime="2025-12-05 14:22:56.262535083 +0000 UTC m=+1584.810133222" Dec 05 14:22:56 crc kubenswrapper[4858]: I1205 14:22:56.683251 4858 scope.go:117] "RemoveContainer" containerID="d36b6edcf130177e6b1ba93276b0d588277a4fe9d7d2c482ecd20ecf3f54bb18" Dec 05 14:22:56 crc kubenswrapper[4858]: I1205 14:22:56.712897 4858 scope.go:117] "RemoveContainer" containerID="928f601cb8d9654ee458318ba6765c0a9723b7b822ef2f70f97324bf36d8928f" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.208991 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.239999 4858 generic.go:334] "Generic (PLEG): container finished" podID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerID="585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194" exitCode=0 Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.240053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg5zf" event={"ID":"fdb5d37c-14a8-40a1-9f73-9fba36a3def4","Type":"ContainerDied","Data":"585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194"} Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.240078 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dg5zf" event={"ID":"fdb5d37c-14a8-40a1-9f73-9fba36a3def4","Type":"ContainerDied","Data":"dc00fffe936ad4b2866decda79f45b32c550d7178f46814f74d41e7d95e53ec6"} Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.240098 4858 scope.go:117] "RemoveContainer" containerID="585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.240228 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dg5zf" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.259515 4858 scope.go:117] "RemoveContainer" containerID="e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.305376 4858 scope.go:117] "RemoveContainer" containerID="51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.338673 4858 scope.go:117] "RemoveContainer" containerID="585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194" Dec 05 14:22:57 crc kubenswrapper[4858]: E1205 14:22:57.339114 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194\": container with ID starting with 585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194 not found: ID does not exist" containerID="585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.339143 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194"} err="failed to get container status \"585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194\": rpc error: code = NotFound desc = could not find container \"585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194\": container with ID starting with 585f92777ad530ed9797e918767f49ea5888783b504d1c6092806ad4eec37194 not found: ID does not exist" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.339163 4858 scope.go:117] "RemoveContainer" containerID="e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9" Dec 05 14:22:57 crc kubenswrapper[4858]: E1205 14:22:57.339676 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9\": container with ID starting with e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9 not found: ID does not exist" 
containerID="e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.339713 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9"} err="failed to get container status \"e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9\": rpc error: code = NotFound desc = could not find container \"e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9\": container with ID starting with e1715239ef7c5f5909a1ecaaefa7c2bb599fcf2dd83dff7e892c1a14f7f927a9 not found: ID does not exist" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.339727 4858 scope.go:117] "RemoveContainer" containerID="51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723" Dec 05 14:22:57 crc kubenswrapper[4858]: E1205 14:22:57.340086 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723\": container with ID starting with 51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723 not found: ID does not exist" containerID="51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.340110 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723"} err="failed to get container status \"51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723\": rpc error: code = NotFound desc = could not find container \"51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723\": container with ID starting with 51d00b641272d43e62751efd6b92d0baebe56d0227dc0053f164d4acfe194723 not found: ID does not exist" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.356297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-utilities\") pod \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.356509 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-catalog-content\") pod \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.356546 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44j62\" (UniqueName: \"kubernetes.io/projected/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-kube-api-access-44j62\") pod \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\" (UID: \"fdb5d37c-14a8-40a1-9f73-9fba36a3def4\") " Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.358369 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-utilities" (OuterVolumeSpecName: "utilities") pod "fdb5d37c-14a8-40a1-9f73-9fba36a3def4" (UID: "fdb5d37c-14a8-40a1-9f73-9fba36a3def4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.377060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdb5d37c-14a8-40a1-9f73-9fba36a3def4" (UID: "fdb5d37c-14a8-40a1-9f73-9fba36a3def4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.378970 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-kube-api-access-44j62" (OuterVolumeSpecName: "kube-api-access-44j62") pod "fdb5d37c-14a8-40a1-9f73-9fba36a3def4" (UID: "fdb5d37c-14a8-40a1-9f73-9fba36a3def4"). InnerVolumeSpecName "kube-api-access-44j62". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.458239 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.458351 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44j62\" (UniqueName: \"kubernetes.io/projected/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-kube-api-access-44j62\") on node \"crc\" DevicePath \"\"" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.458362 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb5d37c-14a8-40a1-9f73-9fba36a3def4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.578271 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg5zf"] Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.587759 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dg5zf"] Dec 05 14:22:57 crc kubenswrapper[4858]: I1205 14:22:57.909257 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" path="/var/lib/kubelet/pods/fdb5d37c-14a8-40a1-9f73-9fba36a3def4/volumes" Dec 05 14:22:58 crc kubenswrapper[4858]: I1205 14:22:58.345429 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:58 crc kubenswrapper[4858]: I1205 14:22:58.345488 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:22:59 crc kubenswrapper[4858]: I1205 14:22:59.400997 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v5cbm" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="registry-server" probeResult="failure" output=< Dec 05 14:22:59 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:22:59 crc kubenswrapper[4858]: > Dec 05 14:23:08 crc kubenswrapper[4858]: I1205 14:23:08.415044 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:23:08 crc kubenswrapper[4858]: I1205 14:23:08.506468 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:23:08 crc 
kubenswrapper[4858]: I1205 14:23:08.665260 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v5cbm"] Dec 05 14:23:10 crc kubenswrapper[4858]: I1205 14:23:10.370425 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v5cbm" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="registry-server" containerID="cri-o://fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b" gracePeriod=2 Dec 05 14:23:10 crc kubenswrapper[4858]: I1205 14:23:10.843041 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:23:10 crc kubenswrapper[4858]: I1205 14:23:10.943688 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-catalog-content\") pod \"11ae4a94-803b-44d2-8652-648c145210c3\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " Dec 05 14:23:10 crc kubenswrapper[4858]: I1205 14:23:10.943792 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpdzj\" (UniqueName: \"kubernetes.io/projected/11ae4a94-803b-44d2-8652-648c145210c3-kube-api-access-vpdzj\") pod \"11ae4a94-803b-44d2-8652-648c145210c3\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " Dec 05 14:23:10 crc kubenswrapper[4858]: I1205 14:23:10.943865 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-utilities\") pod \"11ae4a94-803b-44d2-8652-648c145210c3\" (UID: \"11ae4a94-803b-44d2-8652-648c145210c3\") " Dec 05 14:23:10 crc kubenswrapper[4858]: I1205 14:23:10.946364 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-utilities" (OuterVolumeSpecName: "utilities") pod "11ae4a94-803b-44d2-8652-648c145210c3" (UID: "11ae4a94-803b-44d2-8652-648c145210c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:23:10 crc kubenswrapper[4858]: I1205 14:23:10.951345 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ae4a94-803b-44d2-8652-648c145210c3-kube-api-access-vpdzj" (OuterVolumeSpecName: "kube-api-access-vpdzj") pod "11ae4a94-803b-44d2-8652-648c145210c3" (UID: "11ae4a94-803b-44d2-8652-648c145210c3"). InnerVolumeSpecName "kube-api-access-vpdzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.048238 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpdzj\" (UniqueName: \"kubernetes.io/projected/11ae4a94-803b-44d2-8652-648c145210c3-kube-api-access-vpdzj\") on node \"crc\" DevicePath \"\"" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.048290 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.061941 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11ae4a94-803b-44d2-8652-648c145210c3" (UID: "11ae4a94-803b-44d2-8652-648c145210c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.150078 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11ae4a94-803b-44d2-8652-648c145210c3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.404744 4858 generic.go:334] "Generic (PLEG): container finished" podID="11ae4a94-803b-44d2-8652-648c145210c3" containerID="fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b" exitCode=0 Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.404792 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5cbm" event={"ID":"11ae4a94-803b-44d2-8652-648c145210c3","Type":"ContainerDied","Data":"fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b"} Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.405010 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v5cbm" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.405244 4858 scope.go:117] "RemoveContainer" containerID="fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.405224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5cbm" event={"ID":"11ae4a94-803b-44d2-8652-648c145210c3","Type":"ContainerDied","Data":"143d281762b8a128622a11ee660a21b9aa6ab317cfbdeab7c486025c38d14f65"} Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.432476 4858 scope.go:117] "RemoveContainer" containerID="44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.468622 4858 scope.go:117] "RemoveContainer" containerID="4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.471554 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v5cbm"] Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.479686 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v5cbm"] Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.519650 4858 scope.go:117] "RemoveContainer" containerID="fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b" Dec 05 14:23:11 crc kubenswrapper[4858]: E1205 14:23:11.520405 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b\": container with ID starting with fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b not found: ID does not exist" containerID="fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.520447 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b"} err="failed to get container status \"fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b\": rpc error: code = NotFound desc = could not find container \"fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b\": container with ID starting with fa2f957a17dc8656d7542660d4f109df705ee9ffb1d16ce7a6628b9845cfe21b not found: ID does not exist" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.520474 4858 scope.go:117] "RemoveContainer" containerID="44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc" Dec 05 14:23:11 crc kubenswrapper[4858]: E1205 14:23:11.521024 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc\": container with ID starting with 44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc not found: ID does not exist" containerID="44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.521048 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc"} err="failed to get container status \"44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc\": rpc error: code = NotFound desc = could not find container 
\"44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc\": container with ID starting with 44b9ab521f6be966f5f3044e12db3f882a43d0ef2c12f7eb75adf7cc8413efcc not found: ID does not exist" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.521068 4858 scope.go:117] "RemoveContainer" containerID="4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a" Dec 05 14:23:11 crc kubenswrapper[4858]: E1205 14:23:11.522720 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a\": container with ID starting with 4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a not found: ID does not exist" containerID="4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.522776 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a"} err="failed to get container status \"4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a\": rpc error: code = NotFound desc = could not find container \"4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a\": container with ID starting with 4c74f0ade7ac089077752a1548c127a704c1a31fb4eabb6a9f027053d8dc142a not found: ID does not exist" Dec 05 14:23:11 crc kubenswrapper[4858]: I1205 14:23:11.911633 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11ae4a94-803b-44d2-8652-648c145210c3" path="/var/lib/kubelet/pods/11ae4a94-803b-44d2-8652-648c145210c3/volumes" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.266842 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nfcrz"] Dec 05 14:23:28 crc kubenswrapper[4858]: E1205 14:23:28.267743 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="extract-content" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.267755 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="extract-content" Dec 05 14:23:28 crc kubenswrapper[4858]: E1205 14:23:28.267769 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="extract-utilities" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.267776 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="extract-utilities" Dec 05 14:23:28 crc kubenswrapper[4858]: E1205 14:23:28.267793 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="registry-server" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.267798 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="registry-server" Dec 05 14:23:28 crc kubenswrapper[4858]: E1205 14:23:28.267811 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerName="extract-content" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.267816 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerName="extract-content" Dec 05 14:23:28 crc kubenswrapper[4858]: E1205 14:23:28.267846 4858 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerName="extract-utilities" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.267853 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerName="extract-utilities" Dec 05 14:23:28 crc kubenswrapper[4858]: E1205 14:23:28.267864 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerName="registry-server" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.267870 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerName="registry-server" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.268053 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ae4a94-803b-44d2-8652-648c145210c3" containerName="registry-server" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.268074 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb5d37c-14a8-40a1-9f73-9fba36a3def4" containerName="registry-server" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.269405 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfcrz" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.289115 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfcrz"] Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.377694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dxf\" (UniqueName: \"kubernetes.io/projected/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-kube-api-access-66dxf\") pod \"community-operators-nfcrz\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " pod="openshift-marketplace/community-operators-nfcrz" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.377970 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-catalog-content\") pod \"community-operators-nfcrz\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " pod="openshift-marketplace/community-operators-nfcrz" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.378025 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-utilities\") pod \"community-operators-nfcrz\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " pod="openshift-marketplace/community-operators-nfcrz" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.479488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-catalog-content\") pod \"community-operators-nfcrz\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " pod="openshift-marketplace/community-operators-nfcrz" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.479859 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-utilities\") pod \"community-operators-nfcrz\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " pod="openshift-marketplace/community-operators-nfcrz" Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 
Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.480090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-catalog-content\") pod \"community-operators-nfcrz\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " pod="openshift-marketplace/community-operators-nfcrz"
Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.480371 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-utilities\") pod \"community-operators-nfcrz\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " pod="openshift-marketplace/community-operators-nfcrz"
Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.502728 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66dxf\" (UniqueName: \"kubernetes.io/projected/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-kube-api-access-66dxf\") pod \"community-operators-nfcrz\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " pod="openshift-marketplace/community-operators-nfcrz"
Dec 05 14:23:28 crc kubenswrapper[4858]: I1205 14:23:28.629136 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfcrz"
Dec 05 14:23:29 crc kubenswrapper[4858]: I1205 14:23:29.095392 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfcrz"]
Dec 05 14:23:29 crc kubenswrapper[4858]: I1205 14:23:29.594634 4858 generic.go:334] "Generic (PLEG): container finished" podID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerID="812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f" exitCode=0
Dec 05 14:23:29 crc kubenswrapper[4858]: I1205 14:23:29.594930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcrz" event={"ID":"bbcd735f-a8d6-45e3-b6e8-949dd9b09446","Type":"ContainerDied","Data":"812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f"}
Dec 05 14:23:29 crc kubenswrapper[4858]: I1205 14:23:29.594961 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcrz" event={"ID":"bbcd735f-a8d6-45e3-b6e8-949dd9b09446","Type":"ContainerStarted","Data":"2f1d789db9a0f43d51731e5db0ac627a3a15d97f908d4ea9ce56e3f06c801232"}
Dec 05 14:23:30 crc kubenswrapper[4858]: I1205 14:23:30.607536 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcrz" event={"ID":"bbcd735f-a8d6-45e3-b6e8-949dd9b09446","Type":"ContainerStarted","Data":"84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b"}
Dec 05 14:23:31 crc kubenswrapper[4858]: I1205 14:23:31.621647 4858 generic.go:334] "Generic (PLEG): container finished" podID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerID="84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b" exitCode=0
Dec 05 14:23:31 crc kubenswrapper[4858]: I1205 14:23:31.621738 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcrz" event={"ID":"bbcd735f-a8d6-45e3-b6e8-949dd9b09446","Type":"ContainerDied","Data":"84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b"}
Dec 05 14:23:32 crc kubenswrapper[4858]: I1205 14:23:32.634557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcrz" event={"ID":"bbcd735f-a8d6-45e3-b6e8-949dd9b09446","Type":"ContainerStarted","Data":"0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715"}
Dec 05 14:23:32 crc kubenswrapper[4858]: I1205 14:23:32.656330 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nfcrz" podStartSLOduration=2.078684372 podStartE2EDuration="4.656308098s" podCreationTimestamp="2025-12-05 14:23:28 +0000 UTC" firstStartedPulling="2025-12-05 14:23:29.600045262 +0000 UTC m=+1618.147643401" lastFinishedPulling="2025-12-05 14:23:32.177668988 +0000 UTC m=+1620.725267127" observedRunningTime="2025-12-05 14:23:32.649526915 +0000 UTC m=+1621.197125044" watchObservedRunningTime="2025-12-05 14:23:32.656308098 +0000 UTC m=+1621.203906237"
Dec 05 14:23:38 crc kubenswrapper[4858]: I1205 14:23:38.630655 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nfcrz"
Dec 05 14:23:38 crc kubenswrapper[4858]: I1205 14:23:38.631165 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nfcrz"
Dec 05 14:23:38 crc kubenswrapper[4858]: I1205 14:23:38.682680 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nfcrz"
Dec 05 14:23:38 crc kubenswrapper[4858]: I1205 14:23:38.788542 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nfcrz"
Dec 05 14:23:38 crc kubenswrapper[4858]: I1205 14:23:38.921332 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfcrz"]
Dec 05 14:23:40 crc kubenswrapper[4858]: I1205 14:23:40.749338 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nfcrz" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerName="registry-server" containerID="cri-o://0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715" gracePeriod=2
Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.263578 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfcrz"
Need to start a new one" pod="openshift-marketplace/community-operators-nfcrz" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.453620 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-utilities\") pod \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.453706 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-catalog-content\") pod \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.453818 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66dxf\" (UniqueName: \"kubernetes.io/projected/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-kube-api-access-66dxf\") pod \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\" (UID: \"bbcd735f-a8d6-45e3-b6e8-949dd9b09446\") " Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.455727 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-utilities" (OuterVolumeSpecName: "utilities") pod "bbcd735f-a8d6-45e3-b6e8-949dd9b09446" (UID: "bbcd735f-a8d6-45e3-b6e8-949dd9b09446"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.459775 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-kube-api-access-66dxf" (OuterVolumeSpecName: "kube-api-access-66dxf") pod "bbcd735f-a8d6-45e3-b6e8-949dd9b09446" (UID: "bbcd735f-a8d6-45e3-b6e8-949dd9b09446"). InnerVolumeSpecName "kube-api-access-66dxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.514786 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbcd735f-a8d6-45e3-b6e8-949dd9b09446" (UID: "bbcd735f-a8d6-45e3-b6e8-949dd9b09446"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.556315 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.556567 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.556653 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66dxf\" (UniqueName: \"kubernetes.io/projected/bbcd735f-a8d6-45e3-b6e8-949dd9b09446-kube-api-access-66dxf\") on node \"crc\" DevicePath \"\"" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.776047 4858 generic.go:334] "Generic (PLEG): container finished" podID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerID="0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715" exitCode=0 Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.776091 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcrz" event={"ID":"bbcd735f-a8d6-45e3-b6e8-949dd9b09446","Type":"ContainerDied","Data":"0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715"} Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.776117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfcrz" event={"ID":"bbcd735f-a8d6-45e3-b6e8-949dd9b09446","Type":"ContainerDied","Data":"2f1d789db9a0f43d51731e5db0ac627a3a15d97f908d4ea9ce56e3f06c801232"} Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.776134 4858 scope.go:117] "RemoveContainer" containerID="0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.776290 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfcrz" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.837197 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfcrz"] Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.844658 4858 scope.go:117] "RemoveContainer" containerID="84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.850790 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nfcrz"] Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.874839 4858 scope.go:117] "RemoveContainer" containerID="812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.913321 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" path="/var/lib/kubelet/pods/bbcd735f-a8d6-45e3-b6e8-949dd9b09446/volumes" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.918731 4858 scope.go:117] "RemoveContainer" containerID="0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715" Dec 05 14:23:41 crc kubenswrapper[4858]: E1205 14:23:41.919139 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715\": container with ID starting with 0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715 not found: ID does not exist" containerID="0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.919183 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715"} err="failed to get container status \"0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715\": rpc error: code = NotFound desc = could not find container \"0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715\": container with ID starting with 0605e3a19e614e88d22f6379fcd3f4f35bc7928f47188bde59179b3df9c38715 not found: ID does not exist" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.919208 4858 scope.go:117] "RemoveContainer" containerID="84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b" Dec 05 14:23:41 crc kubenswrapper[4858]: E1205 14:23:41.919938 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b\": container with ID starting with 84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b not found: ID does not exist" containerID="84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.919969 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b"} err="failed to get container status \"84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b\": rpc error: code = NotFound desc = could not find container \"84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b\": container with ID starting with 84cf98a7c81fc9a59cb5ee20ed625afbf5b51c79baa64b6448e98f9274598f2b not found: ID does not exist" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 
14:23:41.919989 4858 scope.go:117] "RemoveContainer" containerID="812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f" Dec 05 14:23:41 crc kubenswrapper[4858]: E1205 14:23:41.920242 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f\": container with ID starting with 812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f not found: ID does not exist" containerID="812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f" Dec 05 14:23:41 crc kubenswrapper[4858]: I1205 14:23:41.920263 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f"} err="failed to get container status \"812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f\": rpc error: code = NotFound desc = could not find container \"812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f\": container with ID starting with 812e9540c0d9d17eb68c3beb1a84f61ec76467a2942c620c5773175017f97a0f not found: ID does not exist" Dec 05 14:23:44 crc kubenswrapper[4858]: I1205 14:23:44.760441 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:23:44 crc kubenswrapper[4858]: I1205 14:23:44.760812 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:24:09 crc kubenswrapper[4858]: I1205 14:24:09.039646 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5lcmn"] Dec 05 14:24:09 crc kubenswrapper[4858]: I1205 14:24:09.051761 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5lcmn"] Dec 05 14:24:09 crc kubenswrapper[4858]: I1205 14:24:09.912340 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ca40e9-5047-4404-a875-cae910187c3b" path="/var/lib/kubelet/pods/77ca40e9-5047-4404-a875-cae910187c3b/volumes" Dec 05 14:24:10 crc kubenswrapper[4858]: I1205 14:24:10.034103 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ba7f-account-create-update-56thq"] Dec 05 14:24:10 crc kubenswrapper[4858]: I1205 14:24:10.045058 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ba7f-account-create-update-56thq"] Dec 05 14:24:11 crc kubenswrapper[4858]: I1205 14:24:11.909510 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf192c4-1689-4d05-8653-7841a5dbbdd0" path="/var/lib/kubelet/pods/8bf192c4-1689-4d05-8653-7841a5dbbdd0/volumes" Dec 05 14:24:14 crc kubenswrapper[4858]: I1205 14:24:14.760511 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:24:14 crc kubenswrapper[4858]: I1205 14:24:14.761158 4858 prober.go:107] "Probe failed" 
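
From 14:23:44 onward the machine-config-daemon liveness probe fails with connection refused on 127.0.0.1:8798, and the failures recur every 30 s (14:23:44, 14:24:14, 14:24:44). Only after a run of consecutive failures does the kubelet restart the container, as happens at 14:24:44 below. A sketch of that bookkeeping, assuming periodSeconds=30 and failureThreshold=3 (both inferred; the log does not print the probe spec):

from datetime import datetime, timedelta

# Failure timestamps copied from the probe entries in this log.
failures = [datetime(2025, 12, 5, 14, 23, 44),
            datetime(2025, 12, 5, 14, 24, 14),
            datetime(2025, 12, 5, 14, 24, 44)]

PERIOD = timedelta(seconds=30)   # assumed periodSeconds
THRESHOLD = 3                    # assumed failureThreshold

consecutive = 1
for earlier, later in zip(failures, failures[1:]):
    # Failures exactly one probe period apart count as consecutive.
    consecutive = consecutive + 1 if later - earlier == PERIOD else 1
print("restart container:", consecutive >= THRESHOLD)  # -> True
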
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:24:26 crc kubenswrapper[4858]: I1205 14:24:26.034927 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wj5nl"] Dec 05 14:24:26 crc kubenswrapper[4858]: I1205 14:24:26.044413 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wj5nl"] Dec 05 14:24:26 crc kubenswrapper[4858]: I1205 14:24:26.055458 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-266f-account-create-update-dhqgj"] Dec 05 14:24:26 crc kubenswrapper[4858]: I1205 14:24:26.069420 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7d17-account-create-update-pjgn4"] Dec 05 14:24:26 crc kubenswrapper[4858]: I1205 14:24:26.077618 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5lwkq"] Dec 05 14:24:26 crc kubenswrapper[4858]: I1205 14:24:26.085987 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-266f-account-create-update-dhqgj"] Dec 05 14:24:26 crc kubenswrapper[4858]: I1205 14:24:26.094611 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5lwkq"] Dec 05 14:24:26 crc kubenswrapper[4858]: I1205 14:24:26.101970 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7d17-account-create-update-pjgn4"] Dec 05 14:24:27 crc kubenswrapper[4858]: I1205 14:24:27.909863 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f8533f-a5fe-4af0-98db-eb1cc52e7b0c" path="/var/lib/kubelet/pods/50f8533f-a5fe-4af0-98db-eb1cc52e7b0c/volumes" Dec 05 14:24:27 crc kubenswrapper[4858]: I1205 14:24:27.910898 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83a08cdd-eca5-4352-bdb6-fa27c4c2c317" path="/var/lib/kubelet/pods/83a08cdd-eca5-4352-bdb6-fa27c4c2c317/volumes" Dec 05 14:24:27 crc kubenswrapper[4858]: I1205 14:24:27.911760 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28ac28e-619d-499c-bc7a-4baa5f06abe9" path="/var/lib/kubelet/pods/b28ac28e-619d-499c-bc7a-4baa5f06abe9/volumes" Dec 05 14:24:27 crc kubenswrapper[4858]: I1205 14:24:27.912330 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53dc11c-7183-4492-879b-ed0d2ca99c18" path="/var/lib/kubelet/pods/e53dc11c-7183-4492-879b-ed0d2ca99c18/volumes" Dec 05 14:24:44 crc kubenswrapper[4858]: I1205 14:24:44.041199 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cl2mg"] Dec 05 14:24:44 crc kubenswrapper[4858]: I1205 14:24:44.052422 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cl2mg"] Dec 05 14:24:44 crc kubenswrapper[4858]: I1205 14:24:44.760880 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:24:44 crc kubenswrapper[4858]: I1205 14:24:44.760956 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:24:44 crc kubenswrapper[4858]: I1205 14:24:44.761013 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:24:44 crc kubenswrapper[4858]: I1205 14:24:44.762067 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:24:44 crc kubenswrapper[4858]: I1205 14:24:44.762146 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" gracePeriod=600 Dec 05 14:24:45 crc kubenswrapper[4858]: I1205 14:24:45.912710 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48dfcb42-ecb6-463d-9e5f-ddbf758dfee3" path="/var/lib/kubelet/pods/48dfcb42-ecb6-463d-9e5f-ddbf758dfee3/volumes" Dec 05 14:24:46 crc kubenswrapper[4858]: E1205 14:24:46.252483 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:24:46 crc kubenswrapper[4858]: I1205 14:24:46.339469 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" exitCode=0 Dec 05 14:24:46 crc kubenswrapper[4858]: I1205 14:24:46.339526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65"} Dec 05 14:24:46 crc kubenswrapper[4858]: I1205 14:24:46.339559 4858 scope.go:117] "RemoveContainer" containerID="b8424605d2464ee3ef0a69ac56cbc16766cf5b070918dfe5d9640a4a043f1721" Dec 05 14:24:46 crc kubenswrapper[4858]: I1205 14:24:46.340280 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:24:46 crc kubenswrapper[4858]: E1205 14:24:46.340519 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.042031 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mghrf"] Dec 05 14:24:47 
crc kubenswrapper[4858]: I1205 14:24:47.054782 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c598-account-create-update-7mdk8"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.067493 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-dfcd-account-create-update-5t722"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.078353 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-qh9gh"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.119504 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mghrf"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.128514 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-446f-account-create-update-tmxrf"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.136951 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-dfcd-account-create-update-5t722"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.144805 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c598-account-create-update-7mdk8"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.153410 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-qh9gh"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.161519 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-446f-account-create-update-tmxrf"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.169170 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-t4rpv"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.178072 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-t4rpv"] Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.909243 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="479f0846-6832-4c62-9791-cde613d23000" path="/var/lib/kubelet/pods/479f0846-6832-4c62-9791-cde613d23000/volumes" Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.911572 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b28c893-a052-4412-8f85-112a1cd06861" path="/var/lib/kubelet/pods/5b28c893-a052-4412-8f85-112a1cd06861/volumes" Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.912462 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fbb7f6b-3583-45c9-bac1-08b968e84700" path="/var/lib/kubelet/pods/6fbb7f6b-3583-45c9-bac1-08b968e84700/volumes" Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.913785 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768a8643-81f7-42cf-a720-3e5daed8bba6" path="/var/lib/kubelet/pods/768a8643-81f7-42cf-a720-3e5daed8bba6/volumes" Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.915781 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0ff391-7201-49f7-be8b-21d096449ae7" path="/var/lib/kubelet/pods/7d0ff391-7201-49f7-be8b-21d096449ae7/volumes" Dec 05 14:24:47 crc kubenswrapper[4858]: I1205 14:24:47.916776 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ef5601-b86a-456e-bad7-e713c17fa711" path="/var/lib/kubelet/pods/85ef5601-b86a-456e-bad7-e713c17fa711/volumes" Dec 05 14:24:48 crc kubenswrapper[4858]: I1205 14:24:48.033901 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-84sxb"] Dec 05 14:24:48 crc kubenswrapper[4858]: I1205 14:24:48.046245 
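
The CrashLoopBackOff message that first appears at 14:24:46 quotes only the cap: "back-off 5m0s restarting failed container". The kubelet's restart back-off doubles from a 10 s base up to that 5 min ceiling (standard kubelet defaults; only the cap is visible in this log), so a container that keeps dying is retried at roughly 10 s, 20 s, 40 s, and so on until every retry is 5 min apart. A sketch of that schedule:

import itertools

def backoff_schedule(base=10, factor=2, cap=300):
    # Doubling restart back-off with a ceiling, mirroring the quoted 5m0s cap.
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

print(list(itertools.islice(backoff_schedule(), 7)))  # [10, 20, 40, 80, 160, 300, 300]
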
4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-84sxb"] Dec 05 14:24:48 crc kubenswrapper[4858]: I1205 14:24:48.055160 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-9d42-account-create-update-272c8"] Dec 05 14:24:48 crc kubenswrapper[4858]: I1205 14:24:48.062871 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-9d42-account-create-update-272c8"] Dec 05 14:24:49 crc kubenswrapper[4858]: I1205 14:24:49.393732 4858 generic.go:334] "Generic (PLEG): container finished" podID="0000bceb-8b33-414f-9d73-2e9b5c1edbfd" containerID="c830316b78c8f95f6ecf4c6d64f8d74b58afbcfc0c92c53beaa76fe02eb8ad6c" exitCode=0 Dec 05 14:24:49 crc kubenswrapper[4858]: I1205 14:24:49.393800 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" event={"ID":"0000bceb-8b33-414f-9d73-2e9b5c1edbfd","Type":"ContainerDied","Data":"c830316b78c8f95f6ecf4c6d64f8d74b58afbcfc0c92c53beaa76fe02eb8ad6c"} Dec 05 14:24:49 crc kubenswrapper[4858]: I1205 14:24:49.910222 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e3b03e-1157-4dfe-b594-57b16e70243a" path="/var/lib/kubelet/pods/62e3b03e-1157-4dfe-b594-57b16e70243a/volumes" Dec 05 14:24:49 crc kubenswrapper[4858]: I1205 14:24:49.911498 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be1c7cb2-81f8-483a-8abe-2c8f3968ad77" path="/var/lib/kubelet/pods/be1c7cb2-81f8-483a-8abe-2c8f3968ad77/volumes" Dec 05 14:24:50 crc kubenswrapper[4858]: I1205 14:24:50.844616 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.001402 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wljsz\" (UniqueName: \"kubernetes.io/projected/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-kube-api-access-wljsz\") pod \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.001506 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-bootstrap-combined-ca-bundle\") pod \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.001599 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-ssh-key\") pod \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.001677 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-inventory\") pod \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\" (UID: \"0000bceb-8b33-414f-9d73-2e9b5c1edbfd\") " Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.006695 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-kube-api-access-wljsz" (OuterVolumeSpecName: "kube-api-access-wljsz") pod "0000bceb-8b33-414f-9d73-2e9b5c1edbfd" (UID: "0000bceb-8b33-414f-9d73-2e9b5c1edbfd"). 
InnerVolumeSpecName "kube-api-access-wljsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.013149 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0000bceb-8b33-414f-9d73-2e9b5c1edbfd" (UID: "0000bceb-8b33-414f-9d73-2e9b5c1edbfd"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.032621 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-inventory" (OuterVolumeSpecName: "inventory") pod "0000bceb-8b33-414f-9d73-2e9b5c1edbfd" (UID: "0000bceb-8b33-414f-9d73-2e9b5c1edbfd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.033816 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0000bceb-8b33-414f-9d73-2e9b5c1edbfd" (UID: "0000bceb-8b33-414f-9d73-2e9b5c1edbfd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.103514 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.103547 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wljsz\" (UniqueName: \"kubernetes.io/projected/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-kube-api-access-wljsz\") on node \"crc\" DevicePath \"\"" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.103558 4858 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.103567 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0000bceb-8b33-414f-9d73-2e9b5c1edbfd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.411509 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" event={"ID":"0000bceb-8b33-414f-9d73-2e9b5c1edbfd","Type":"ContainerDied","Data":"883201676ed81666f34837c3ca7b0abd6dede33fd0c035f3c67794d2a2ceb02b"} Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.411561 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="883201676ed81666f34837c3ca7b0abd6dede33fd0c035f3c67794d2a2ceb02b" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.411586 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-h6w5p" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.521707 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c"] Dec 05 14:24:51 crc kubenswrapper[4858]: E1205 14:24:51.522181 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerName="extract-content" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.522204 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerName="extract-content" Dec 05 14:24:51 crc kubenswrapper[4858]: E1205 14:24:51.522243 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerName="registry-server" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.522251 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerName="registry-server" Dec 05 14:24:51 crc kubenswrapper[4858]: E1205 14:24:51.522264 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0000bceb-8b33-414f-9d73-2e9b5c1edbfd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.522272 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0000bceb-8b33-414f-9d73-2e9b5c1edbfd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 05 14:24:51 crc kubenswrapper[4858]: E1205 14:24:51.522320 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerName="extract-utilities" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.522327 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerName="extract-utilities" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.522530 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcd735f-a8d6-45e3-b6e8-949dd9b09446" containerName="registry-server" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.522551 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0000bceb-8b33-414f-9d73-2e9b5c1edbfd" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.523283 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.525266 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.525551 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.525812 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.526086 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.536521 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c"] Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.612709 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.613054 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxvf\" (UniqueName: \"kubernetes.io/projected/6bfc7ad2-f490-4415-944c-43ef46ae66ce-kube-api-access-jjxvf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.613140 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.714948 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.715029 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.715125 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjxvf\" (UniqueName: \"kubernetes.io/projected/6bfc7ad2-f490-4415-944c-43ef46ae66ce-kube-api-access-jjxvf\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.718648 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.718692 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.731880 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjxvf\" (UniqueName: \"kubernetes.io/projected/6bfc7ad2-f490-4415-944c-43ef46ae66ce-kube-api-access-jjxvf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xg49c\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:51 crc kubenswrapper[4858]: I1205 14:24:51.845227 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:24:52 crc kubenswrapper[4858]: I1205 14:24:52.476518 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c"] Dec 05 14:24:53 crc kubenswrapper[4858]: I1205 14:24:53.426804 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" event={"ID":"6bfc7ad2-f490-4415-944c-43ef46ae66ce","Type":"ContainerStarted","Data":"3294df97fa096befa03ef5acec9685bdd610f12ce3f9b1c8147eeec138164d66"} Dec 05 14:24:55 crc kubenswrapper[4858]: I1205 14:24:55.457170 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" event={"ID":"6bfc7ad2-f490-4415-944c-43ef46ae66ce","Type":"ContainerStarted","Data":"cb1e86d25f7531988e63a33de4d46651ca1d3ba45f4a316565cb5ff576e03600"} Dec 05 14:24:55 crc kubenswrapper[4858]: I1205 14:24:55.491933 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" podStartSLOduration=2.759788522 podStartE2EDuration="4.491914087s" podCreationTimestamp="2025-12-05 14:24:51 +0000 UTC" firstStartedPulling="2025-12-05 14:24:52.496669712 +0000 UTC m=+1701.044267851" lastFinishedPulling="2025-12-05 14:24:54.228795277 +0000 UTC m=+1702.776393416" observedRunningTime="2025-12-05 14:24:55.484333432 +0000 UTC m=+1704.031931571" watchObservedRunningTime="2025-12-05 14:24:55.491914087 +0000 UTC m=+1704.039512216" Dec 05 14:24:56 crc kubenswrapper[4858]: I1205 14:24:56.870226 4858 scope.go:117] "RemoveContainer" containerID="f61829d7f9dfcbb3a4fdb6930f130fff9260df20125133b0154454d503a3030f" Dec 05 14:24:56 crc kubenswrapper[4858]: I1205 14:24:56.905987 4858 scope.go:117] "RemoveContainer" 
containerID="175cf61c7405767871eead7eb4f9c559d8721695dc1e54e9a9abc8d198c95d68" Dec 05 14:24:56 crc kubenswrapper[4858]: I1205 14:24:56.930677 4858 scope.go:117] "RemoveContainer" containerID="be1fcccf413fbaec45e43f5648772f93e33c872411abed6b2257725101eeded0" Dec 05 14:24:56 crc kubenswrapper[4858]: I1205 14:24:56.986509 4858 scope.go:117] "RemoveContainer" containerID="c9442e63f0c5957159579b6d4fffcb73ffdc6327bf09c7cc0559031c8d017720" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.043115 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6n4wj"] Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.052042 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6n4wj"] Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.062368 4858 scope.go:117] "RemoveContainer" containerID="155a2abcbc9e9cf802ea721aed75d24a1b579285aa9e9675635c2c00f6d2dc28" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.099083 4858 scope.go:117] "RemoveContainer" containerID="29150479d981ad9c9fe934fb2564200e1c4d615d1cbe5cba0b9a795894015b36" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.136710 4858 scope.go:117] "RemoveContainer" containerID="eab931537e77eef25f737906aa0df423f1c7640efb1c1bebc51e9f3434001c75" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.178235 4858 scope.go:117] "RemoveContainer" containerID="e2625d9407e758869df51d6cde3d25b335ed5c108cca30e228420e67d53e6ca6" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.222290 4858 scope.go:117] "RemoveContainer" containerID="d130387f25faf6d27b9c3053479efd55a3df45f009bc151467e65137b8b82b79" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.257105 4858 scope.go:117] "RemoveContainer" containerID="983b4227b1a3b4fa005273f71c6fad6c6a4ca2710332e045017479be3969dacc" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.290092 4858 scope.go:117] "RemoveContainer" containerID="d911c6acd7b15e00234f14117a3a832b0fc5c1ccbd3e50360ad526fc0348a28e" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.321331 4858 scope.go:117] "RemoveContainer" containerID="d8b8f7b4376a7cca3edb0cfa4c554f05adf00af8e3560915d85f6f206307b004" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.420507 4858 scope.go:117] "RemoveContainer" containerID="4e96b9e2dfe266ec6a59dc053f704e0d89c44dcf982d6f93f4d2d06706c22626" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.440248 4858 scope.go:117] "RemoveContainer" containerID="49ba9d55564eb329918f6d4ea4f3da881a2e3aed307cff1cbe6890d75ba10461" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.463908 4858 scope.go:117] "RemoveContainer" containerID="bfb73b62fd19ecd260ff9de5e818be4539924e0f8e1f692d0bd699c0c50f1b9f" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.492356 4858 scope.go:117] "RemoveContainer" containerID="97326f483f5c296056b2089e182b32ff63bd7f519d9cf5bd90353684880af84d" Dec 05 14:24:57 crc kubenswrapper[4858]: I1205 14:24:57.909735 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5aace7-7479-454e-b9c3-c83f492b0786" path="/var/lib/kubelet/pods/5f5aace7-7479-454e-b9c3-c83f492b0786/volumes" Dec 05 14:24:59 crc kubenswrapper[4858]: I1205 14:24:59.899592 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:24:59 crc kubenswrapper[4858]: E1205 14:24:59.899811 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:25:14 crc kubenswrapper[4858]: I1205 14:25:14.899582 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:25:14 crc kubenswrapper[4858]: E1205 14:25:14.900342 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:25:28 crc kubenswrapper[4858]: I1205 14:25:28.900430 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:25:28 crc kubenswrapper[4858]: E1205 14:25:28.901787 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:25:41 crc kubenswrapper[4858]: I1205 14:25:41.909668 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:25:41 crc kubenswrapper[4858]: E1205 14:25:41.913134 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:25:51 crc kubenswrapper[4858]: I1205 14:25:51.070623 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-fp96h"] Dec 05 14:25:51 crc kubenswrapper[4858]: I1205 14:25:51.087209 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-fp96h"] Dec 05 14:25:51 crc kubenswrapper[4858]: I1205 14:25:51.911984 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f11e2282-12af-4a8d-8f16-eab320d07d4e" path="/var/lib/kubelet/pods/f11e2282-12af-4a8d-8f16-eab320d07d4e/volumes" Dec 05 14:25:52 crc kubenswrapper[4858]: I1205 14:25:52.900714 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:25:52 crc kubenswrapper[4858]: E1205 14:25:52.902369 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:25:57 crc 
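
While the back-off is in force the pod worker keeps re-queuing the pod, so the "Error syncing pod, skipping" line repeats every ten-odd seconds (14:24:46, 14:24:59, 14:25:14, 14:25:28, 14:25:41, 14:25:52) without any restart being attempted. The gaps, computed from the timestamps above:

from datetime import datetime

stamps = ["14:24:46", "14:24:59", "14:25:14", "14:25:28", "14:25:41", "14:25:52"]
times = [datetime.strptime(s, "%H:%M:%S") for s in stamps]
print([int((b - a).total_seconds()) for a, b in zip(times, times[1:])])
# -> [13, 15, 14, 13, 11]: the resync cadence, not the restart back-off itself
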
kubenswrapper[4858]: I1205 14:25:57.779603 4858 scope.go:117] "RemoveContainer" containerID="a367c902d2b57ae002427b5fe377ba1ca8489d79024410aee8b85b0e36323201" Dec 05 14:25:57 crc kubenswrapper[4858]: I1205 14:25:57.806069 4858 scope.go:117] "RemoveContainer" containerID="e758f9573494956522352e0feafda2d1e9cfbd869deec084d8d4586f528c2e50" Dec 05 14:25:58 crc kubenswrapper[4858]: I1205 14:25:58.044813 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-s8q57"] Dec 05 14:25:58 crc kubenswrapper[4858]: I1205 14:25:58.059538 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h8ccs"] Dec 05 14:25:58 crc kubenswrapper[4858]: I1205 14:25:58.068983 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-s8q57"] Dec 05 14:25:58 crc kubenswrapper[4858]: I1205 14:25:58.078005 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h8ccs"] Dec 05 14:25:59 crc kubenswrapper[4858]: I1205 14:25:59.916061 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd10daa-322e-4445-9671-d50447afa9d7" path="/var/lib/kubelet/pods/1fd10daa-322e-4445-9671-d50447afa9d7/volumes" Dec 05 14:25:59 crc kubenswrapper[4858]: I1205 14:25:59.918322 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8c113e-5e71-4e4f-a8c7-66caea8a6068" path="/var/lib/kubelet/pods/9f8c113e-5e71-4e4f-a8c7-66caea8a6068/volumes" Dec 05 14:26:00 crc kubenswrapper[4858]: I1205 14:26:00.024053 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-5f99f"] Dec 05 14:26:00 crc kubenswrapper[4858]: I1205 14:26:00.032775 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-5f99f"] Dec 05 14:26:01 crc kubenswrapper[4858]: I1205 14:26:01.915075 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945b1178-6672-45ba-bee9-335d1a2fec5c" path="/var/lib/kubelet/pods/945b1178-6672-45ba-bee9-335d1a2fec5c/volumes" Dec 05 14:26:06 crc kubenswrapper[4858]: I1205 14:26:06.899681 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:26:06 crc kubenswrapper[4858]: E1205 14:26:06.900418 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:26:18 crc kubenswrapper[4858]: I1205 14:26:18.900150 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:26:18 crc kubenswrapper[4858]: E1205 14:26:18.900987 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:26:20 crc kubenswrapper[4858]: I1205 14:26:20.047699 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-fbkbh"] Dec 05 
14:26:20 crc kubenswrapper[4858]: I1205 14:26:20.058090 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-glkkv"] Dec 05 14:26:20 crc kubenswrapper[4858]: I1205 14:26:20.075841 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-fbkbh"] Dec 05 14:26:20 crc kubenswrapper[4858]: I1205 14:26:20.075921 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-glkkv"] Dec 05 14:26:21 crc kubenswrapper[4858]: I1205 14:26:21.911566 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be96efe-970b-4639-8744-3e63a0abfbd6" path="/var/lib/kubelet/pods/9be96efe-970b-4639-8744-3e63a0abfbd6/volumes" Dec 05 14:26:21 crc kubenswrapper[4858]: I1205 14:26:21.912534 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd" path="/var/lib/kubelet/pods/aaa09180-fd71-4a52-b7ff-7d9cdc3f06dd/volumes" Dec 05 14:26:30 crc kubenswrapper[4858]: I1205 14:26:30.367209 4858 generic.go:334] "Generic (PLEG): container finished" podID="6bfc7ad2-f490-4415-944c-43ef46ae66ce" containerID="cb1e86d25f7531988e63a33de4d46651ca1d3ba45f4a316565cb5ff576e03600" exitCode=0 Dec 05 14:26:30 crc kubenswrapper[4858]: I1205 14:26:30.367309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" event={"ID":"6bfc7ad2-f490-4415-944c-43ef46ae66ce","Type":"ContainerDied","Data":"cb1e86d25f7531988e63a33de4d46651ca1d3ba45f4a316565cb5ff576e03600"} Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.758686 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.799749 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-inventory\") pod \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.799804 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjxvf\" (UniqueName: \"kubernetes.io/projected/6bfc7ad2-f490-4415-944c-43ef46ae66ce-kube-api-access-jjxvf\") pod \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.807601 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bfc7ad2-f490-4415-944c-43ef46ae66ce-kube-api-access-jjxvf" (OuterVolumeSpecName: "kube-api-access-jjxvf") pod "6bfc7ad2-f490-4415-944c-43ef46ae66ce" (UID: "6bfc7ad2-f490-4415-944c-43ef46ae66ce"). InnerVolumeSpecName "kube-api-access-jjxvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.827615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-inventory" (OuterVolumeSpecName: "inventory") pod "6bfc7ad2-f490-4415-944c-43ef46ae66ce" (UID: "6bfc7ad2-f490-4415-944c-43ef46ae66ce"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.901420 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-ssh-key\") pod \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\" (UID: \"6bfc7ad2-f490-4415-944c-43ef46ae66ce\") " Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.902147 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.902166 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjxvf\" (UniqueName: \"kubernetes.io/projected/6bfc7ad2-f490-4415-944c-43ef46ae66ce-kube-api-access-jjxvf\") on node \"crc\" DevicePath \"\"" Dec 05 14:26:31 crc kubenswrapper[4858]: I1205 14:26:31.933060 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6bfc7ad2-f490-4415-944c-43ef46ae66ce" (UID: "6bfc7ad2-f490-4415-944c-43ef46ae66ce"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.003659 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6bfc7ad2-f490-4415-944c-43ef46ae66ce-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.388309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" event={"ID":"6bfc7ad2-f490-4415-944c-43ef46ae66ce","Type":"ContainerDied","Data":"3294df97fa096befa03ef5acec9685bdd610f12ce3f9b1c8147eeec138164d66"} Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.388641 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3294df97fa096befa03ef5acec9685bdd610f12ce3f9b1c8147eeec138164d66" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.388363 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xg49c" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.492035 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n"] Dec 05 14:26:32 crc kubenswrapper[4858]: E1205 14:26:32.492525 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfc7ad2-f490-4415-944c-43ef46ae66ce" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.492549 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfc7ad2-f490-4415-944c-43ef46ae66ce" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.492865 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bfc7ad2-f490-4415-944c-43ef46ae66ce" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.493778 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.497024 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.497210 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.497352 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.518994 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.524128 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n"] Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.613790 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.613944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzkqv\" (UniqueName: \"kubernetes.io/projected/d903fbc3-5741-47cf-85bc-f5fd353e89fc-kube-api-access-lzkqv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.613985 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.717161 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.717326 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzkqv\" (UniqueName: \"kubernetes.io/projected/d903fbc3-5741-47cf-85bc-f5fd353e89fc-kube-api-access-lzkqv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.717386 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.723938 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.726309 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.746317 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzkqv\" (UniqueName: \"kubernetes.io/projected/d903fbc3-5741-47cf-85bc-f5fd353e89fc-kube-api-access-lzkqv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:32 crc kubenswrapper[4858]: I1205 14:26:32.843800 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:26:33 crc kubenswrapper[4858]: I1205 14:26:33.448871 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n"] Dec 05 14:26:33 crc kubenswrapper[4858]: I1205 14:26:33.915062 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:26:33 crc kubenswrapper[4858]: E1205 14:26:33.915778 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:26:34 crc kubenswrapper[4858]: I1205 14:26:34.407387 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" event={"ID":"d903fbc3-5741-47cf-85bc-f5fd353e89fc","Type":"ContainerStarted","Data":"2e7b2523c8a9f68efd5c8bb24e7869f74ce5c6393549675b202b356ccf33087f"} Dec 05 14:26:35 crc kubenswrapper[4858]: I1205 14:26:35.417784 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" event={"ID":"d903fbc3-5741-47cf-85bc-f5fd353e89fc","Type":"ContainerStarted","Data":"330bb6294f1953e9bf5761f39f977e58597e79d45fb1d9a539f0ff19d4d8fbd8"} Dec 05 14:26:35 crc kubenswrapper[4858]: I1205 14:26:35.443788 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" podStartSLOduration=2.501487786 
podStartE2EDuration="3.443767864s" podCreationTimestamp="2025-12-05 14:26:32 +0000 UTC" firstStartedPulling="2025-12-05 14:26:33.451941502 +0000 UTC m=+1801.999539641" lastFinishedPulling="2025-12-05 14:26:34.39422158 +0000 UTC m=+1802.941819719" observedRunningTime="2025-12-05 14:26:35.442379456 +0000 UTC m=+1803.989977635" watchObservedRunningTime="2025-12-05 14:26:35.443767864 +0000 UTC m=+1803.991366003" Dec 05 14:26:47 crc kubenswrapper[4858]: I1205 14:26:47.900064 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:26:47 crc kubenswrapper[4858]: E1205 14:26:47.901283 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:26:57 crc kubenswrapper[4858]: I1205 14:26:57.912837 4858 scope.go:117] "RemoveContainer" containerID="43e75b7cf74f1bebb6928b8b904df33609c5a0614a452248da75d92c95f07020" Dec 05 14:26:57 crc kubenswrapper[4858]: I1205 14:26:57.953026 4858 scope.go:117] "RemoveContainer" containerID="9ec7f3c7d56605d95fb866a8fd13d3cac9f348ecabe1632ff44025d37aced302" Dec 05 14:26:58 crc kubenswrapper[4858]: I1205 14:26:58.004008 4858 scope.go:117] "RemoveContainer" containerID="8c753ac2a459d60383289055d804ab3eda23dcab1c3ac42fbbdc119023a557fd" Dec 05 14:26:58 crc kubenswrapper[4858]: I1205 14:26:58.046175 4858 scope.go:117] "RemoveContainer" containerID="391ba69855cd14c436b0eec6786e635e6fe96366f292095edb7bfe314cefed77" Dec 05 14:26:58 crc kubenswrapper[4858]: I1205 14:26:58.093811 4858 scope.go:117] "RemoveContainer" containerID="dd0c43c5b3f457cd61d776f4e91369fb64de4b8d51af3fa02bcae90fc1f9ef34" Dec 05 14:26:58 crc kubenswrapper[4858]: I1205 14:26:58.899640 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:26:58 crc kubenswrapper[4858]: E1205 14:26:58.900377 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:27:12 crc kubenswrapper[4858]: I1205 14:27:12.899490 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:27:12 crc kubenswrapper[4858]: E1205 14:27:12.900313 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:27:25 crc kubenswrapper[4858]: I1205 14:27:25.900882 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:27:25 crc kubenswrapper[4858]: E1205 
14:27:25.901665 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:27:36 crc kubenswrapper[4858]: I1205 14:27:36.900024 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:27:36 crc kubenswrapper[4858]: E1205 14:27:36.900745 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:27:40 crc kubenswrapper[4858]: I1205 14:27:40.043035 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7bdnq"] Dec 05 14:27:40 crc kubenswrapper[4858]: I1205 14:27:40.055708 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-6546b"] Dec 05 14:27:40 crc kubenswrapper[4858]: I1205 14:27:40.072027 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-6546b"] Dec 05 14:27:40 crc kubenswrapper[4858]: I1205 14:27:40.075378 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7bdnq"] Dec 05 14:27:41 crc kubenswrapper[4858]: I1205 14:27:41.045604 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jhglf"] Dec 05 14:27:41 crc kubenswrapper[4858]: I1205 14:27:41.057663 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jhglf"] Dec 05 14:27:41 crc kubenswrapper[4858]: I1205 14:27:41.912553 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2066f614-ad2b-4947-8c14-b9df8e78fcac" path="/var/lib/kubelet/pods/2066f614-ad2b-4947-8c14-b9df8e78fcac/volumes" Dec 05 14:27:41 crc kubenswrapper[4858]: I1205 14:27:41.913366 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960299c2-8250-45a8-a10c-c4ee4b105910" path="/var/lib/kubelet/pods/960299c2-8250-45a8-a10c-c4ee4b105910/volumes" Dec 05 14:27:41 crc kubenswrapper[4858]: I1205 14:27:41.914342 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdcbb580-deba-4812-a820-2170d122b199" path="/var/lib/kubelet/pods/fdcbb580-deba-4812-a820-2170d122b199/volumes" Dec 05 14:27:42 crc kubenswrapper[4858]: I1205 14:27:42.065649 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-eda8-account-create-update-4d2w5"] Dec 05 14:27:42 crc kubenswrapper[4858]: I1205 14:27:42.075494 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5ee4-account-create-update-l65v4"] Dec 05 14:27:42 crc kubenswrapper[4858]: I1205 14:27:42.096078 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9138-account-create-update-sj4qg"] Dec 05 14:27:42 crc kubenswrapper[4858]: I1205 14:27:42.106352 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-eda8-account-create-update-4d2w5"] Dec 05 
14:27:42 crc kubenswrapper[4858]: I1205 14:27:42.119491 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5ee4-account-create-update-l65v4"] Dec 05 14:27:42 crc kubenswrapper[4858]: I1205 14:27:42.129715 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9138-account-create-update-sj4qg"] Dec 05 14:27:43 crc kubenswrapper[4858]: I1205 14:27:43.913113 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfbd339-df73-4eff-adbc-6394489044cd" path="/var/lib/kubelet/pods/9dfbd339-df73-4eff-adbc-6394489044cd/volumes" Dec 05 14:27:43 crc kubenswrapper[4858]: I1205 14:27:43.914394 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2455633-0480-46f9-b598-4d12d4414a5a" path="/var/lib/kubelet/pods/b2455633-0480-46f9-b598-4d12d4414a5a/volumes" Dec 05 14:27:43 crc kubenswrapper[4858]: I1205 14:27:43.915541 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd999106-5891-4eea-8021-c3c7d5899b3f" path="/var/lib/kubelet/pods/dd999106-5891-4eea-8021-c3c7d5899b3f/volumes" Dec 05 14:27:48 crc kubenswrapper[4858]: I1205 14:27:48.899757 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:27:48 crc kubenswrapper[4858]: E1205 14:27:48.900608 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:27:56 crc kubenswrapper[4858]: E1205 14:27:56.755547 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd903fbc3_5741_47cf_85bc_f5fd353e89fc.slice/crio-330bb6294f1953e9bf5761f39f977e58597e79d45fb1d9a539f0ff19d4d8fbd8.scope\": RecentStats: unable to find data in memory cache]" Dec 05 14:27:57 crc kubenswrapper[4858]: I1205 14:27:57.177129 4858 generic.go:334] "Generic (PLEG): container finished" podID="d903fbc3-5741-47cf-85bc-f5fd353e89fc" containerID="330bb6294f1953e9bf5761f39f977e58597e79d45fb1d9a539f0ff19d4d8fbd8" exitCode=0 Dec 05 14:27:57 crc kubenswrapper[4858]: I1205 14:27:57.177169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" event={"ID":"d903fbc3-5741-47cf-85bc-f5fd353e89fc","Type":"ContainerDied","Data":"330bb6294f1953e9bf5761f39f977e58597e79d45fb1d9a539f0ff19d4d8fbd8"} Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.257392 4858 scope.go:117] "RemoveContainer" containerID="09ff3f037151dae528501d6940ac6e8f4f26f89c9a66b470e4b51de25e3dce8e" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.306137 4858 scope.go:117] "RemoveContainer" containerID="a6cbc61d4e99c0e43c29c6c38ec09f5ab789de80f109581aefbaeb9284833700" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.335631 4858 scope.go:117] "RemoveContainer" containerID="f64ac43eb49d540b2c784c0f48cdae07eb2d65d75a2c20375089766bb67f9c4c" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.387269 4858 scope.go:117] "RemoveContainer" containerID="c1bedd768eb4843a65f48083eee24d486ba42f8c7892ce8faa7f2830a88aadfa" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 
14:27:58.459204 4858 scope.go:117] "RemoveContainer" containerID="2238984a99460a7f402cc398cbe550c56546492b223d89eecd5105000ea2ba30" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.526579 4858 scope.go:117] "RemoveContainer" containerID="ae7626dabc430e9e77b6b7c9d6d693877ef922da9cba2621b611b8d00334bc3d" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.567867 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.706366 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzkqv\" (UniqueName: \"kubernetes.io/projected/d903fbc3-5741-47cf-85bc-f5fd353e89fc-kube-api-access-lzkqv\") pod \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.706567 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-ssh-key\") pod \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.707224 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-inventory\") pod \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\" (UID: \"d903fbc3-5741-47cf-85bc-f5fd353e89fc\") " Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.713017 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d903fbc3-5741-47cf-85bc-f5fd353e89fc-kube-api-access-lzkqv" (OuterVolumeSpecName: "kube-api-access-lzkqv") pod "d903fbc3-5741-47cf-85bc-f5fd353e89fc" (UID: "d903fbc3-5741-47cf-85bc-f5fd353e89fc"). InnerVolumeSpecName "kube-api-access-lzkqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.733982 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d903fbc3-5741-47cf-85bc-f5fd353e89fc" (UID: "d903fbc3-5741-47cf-85bc-f5fd353e89fc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.735992 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-inventory" (OuterVolumeSpecName: "inventory") pod "d903fbc3-5741-47cf-85bc-f5fd353e89fc" (UID: "d903fbc3-5741-47cf-85bc-f5fd353e89fc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.809748 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.809787 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d903fbc3-5741-47cf-85bc-f5fd353e89fc-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:27:58 crc kubenswrapper[4858]: I1205 14:27:58.809802 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzkqv\" (UniqueName: \"kubernetes.io/projected/d903fbc3-5741-47cf-85bc-f5fd353e89fc-kube-api-access-lzkqv\") on node \"crc\" DevicePath \"\"" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.195515 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" event={"ID":"d903fbc3-5741-47cf-85bc-f5fd353e89fc","Type":"ContainerDied","Data":"2e7b2523c8a9f68efd5c8bb24e7869f74ce5c6393549675b202b356ccf33087f"} Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.195567 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8cm2n" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.195575 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e7b2523c8a9f68efd5c8bb24e7869f74ce5c6393549675b202b356ccf33087f" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.297444 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9"] Dec 05 14:27:59 crc kubenswrapper[4858]: E1205 14:27:59.297835 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d903fbc3-5741-47cf-85bc-f5fd353e89fc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.297849 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d903fbc3-5741-47cf-85bc-f5fd353e89fc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.298044 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d903fbc3-5741-47cf-85bc-f5fd353e89fc" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.298638 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.300748 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.300763 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.301218 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.301774 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.317608 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9"] Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.425189 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.425307 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwxqc\" (UniqueName: \"kubernetes.io/projected/601ec05d-4906-4c83-910d-d5c4a43c94dd-kube-api-access-hwxqc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.425358 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.527715 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwxqc\" (UniqueName: \"kubernetes.io/projected/601ec05d-4906-4c83-910d-d5c4a43c94dd-kube-api-access-hwxqc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.527784 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.527939 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.531054 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.546563 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwxqc\" (UniqueName: \"kubernetes.io/projected/601ec05d-4906-4c83-910d-d5c4a43c94dd-kube-api-access-hwxqc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.548130 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:27:59 crc kubenswrapper[4858]: I1205 14:27:59.635629 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:28:00 crc kubenswrapper[4858]: I1205 14:28:00.147628 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9"] Dec 05 14:28:00 crc kubenswrapper[4858]: I1205 14:28:00.156240 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:28:00 crc kubenswrapper[4858]: I1205 14:28:00.203691 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" event={"ID":"601ec05d-4906-4c83-910d-d5c4a43c94dd","Type":"ContainerStarted","Data":"703f6fbfbaed807b14e8922847d823ece8784ba643fa1a0f5cadf27d94a005f1"} Dec 05 14:28:01 crc kubenswrapper[4858]: I1205 14:28:01.224475 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" event={"ID":"601ec05d-4906-4c83-910d-d5c4a43c94dd","Type":"ContainerStarted","Data":"84cb88ec20ead423e3b43de0c57baf65e544a9e6e687539896be6362c9128c5f"} Dec 05 14:28:01 crc kubenswrapper[4858]: I1205 14:28:01.241180 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" podStartSLOduration=1.785923256 podStartE2EDuration="2.241159675s" podCreationTimestamp="2025-12-05 14:27:59 +0000 UTC" firstStartedPulling="2025-12-05 14:28:00.155982889 +0000 UTC m=+1888.703581028" lastFinishedPulling="2025-12-05 14:28:00.611219308 +0000 UTC m=+1889.158817447" observedRunningTime="2025-12-05 14:28:01.238235703 +0000 UTC m=+1889.785833842" watchObservedRunningTime="2025-12-05 14:28:01.241159675 +0000 UTC m=+1889.788757814" Dec 05 14:28:03 crc kubenswrapper[4858]: I1205 14:28:03.899693 4858 scope.go:117] "RemoveContainer" 
containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:28:03 crc kubenswrapper[4858]: E1205 14:28:03.900122 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:28:06 crc kubenswrapper[4858]: I1205 14:28:06.270847 4858 generic.go:334] "Generic (PLEG): container finished" podID="601ec05d-4906-4c83-910d-d5c4a43c94dd" containerID="84cb88ec20ead423e3b43de0c57baf65e544a9e6e687539896be6362c9128c5f" exitCode=0 Dec 05 14:28:06 crc kubenswrapper[4858]: I1205 14:28:06.271121 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" event={"ID":"601ec05d-4906-4c83-910d-d5c4a43c94dd","Type":"ContainerDied","Data":"84cb88ec20ead423e3b43de0c57baf65e544a9e6e687539896be6362c9128c5f"} Dec 05 14:28:07 crc kubenswrapper[4858]: I1205 14:28:07.736448 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:28:07 crc kubenswrapper[4858]: I1205 14:28:07.905985 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-ssh-key\") pod \"601ec05d-4906-4c83-910d-d5c4a43c94dd\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " Dec 05 14:28:07 crc kubenswrapper[4858]: I1205 14:28:07.906041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwxqc\" (UniqueName: \"kubernetes.io/projected/601ec05d-4906-4c83-910d-d5c4a43c94dd-kube-api-access-hwxqc\") pod \"601ec05d-4906-4c83-910d-d5c4a43c94dd\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " Dec 05 14:28:07 crc kubenswrapper[4858]: I1205 14:28:07.906173 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-inventory\") pod \"601ec05d-4906-4c83-910d-d5c4a43c94dd\" (UID: \"601ec05d-4906-4c83-910d-d5c4a43c94dd\") " Dec 05 14:28:07 crc kubenswrapper[4858]: I1205 14:28:07.917195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601ec05d-4906-4c83-910d-d5c4a43c94dd-kube-api-access-hwxqc" (OuterVolumeSpecName: "kube-api-access-hwxqc") pod "601ec05d-4906-4c83-910d-d5c4a43c94dd" (UID: "601ec05d-4906-4c83-910d-d5c4a43c94dd"). InnerVolumeSpecName "kube-api-access-hwxqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:28:07 crc kubenswrapper[4858]: I1205 14:28:07.936291 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-inventory" (OuterVolumeSpecName: "inventory") pod "601ec05d-4906-4c83-910d-d5c4a43c94dd" (UID: "601ec05d-4906-4c83-910d-d5c4a43c94dd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:28:07 crc kubenswrapper[4858]: I1205 14:28:07.941072 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "601ec05d-4906-4c83-910d-d5c4a43c94dd" (UID: "601ec05d-4906-4c83-910d-d5c4a43c94dd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.008024 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.008055 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/601ec05d-4906-4c83-910d-d5c4a43c94dd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.008065 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwxqc\" (UniqueName: \"kubernetes.io/projected/601ec05d-4906-4c83-910d-d5c4a43c94dd-kube-api-access-hwxqc\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.290142 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" event={"ID":"601ec05d-4906-4c83-910d-d5c4a43c94dd","Type":"ContainerDied","Data":"703f6fbfbaed807b14e8922847d823ece8784ba643fa1a0f5cadf27d94a005f1"} Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.290189 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="703f6fbfbaed807b14e8922847d823ece8784ba643fa1a0f5cadf27d94a005f1" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.290245 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ntbr9" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.365859 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh"] Dec 05 14:28:08 crc kubenswrapper[4858]: E1205 14:28:08.366275 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601ec05d-4906-4c83-910d-d5c4a43c94dd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.366287 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="601ec05d-4906-4c83-910d-d5c4a43c94dd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.366456 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="601ec05d-4906-4c83-910d-d5c4a43c94dd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.367049 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.369235 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.369264 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.369548 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.369556 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.386059 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh"] Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.516326 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.516419 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.516498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8zc\" (UniqueName: \"kubernetes.io/projected/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-kube-api-access-6c8zc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.618040 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.618170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8zc\" (UniqueName: \"kubernetes.io/projected/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-kube-api-access-6c8zc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.618293 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: 
\"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.622455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.623511 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.636985 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8zc\" (UniqueName: \"kubernetes.io/projected/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-kube-api-access-6c8zc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7j5xh\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:08 crc kubenswrapper[4858]: I1205 14:28:08.687451 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:09 crc kubenswrapper[4858]: I1205 14:28:09.172460 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh"] Dec 05 14:28:09 crc kubenswrapper[4858]: I1205 14:28:09.299510 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" event={"ID":"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3","Type":"ContainerStarted","Data":"5119af2bc634c77f9f6353dbb2274b485b51e58a3f7e764ac29e57b1982004f2"} Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.239598 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6cvz"] Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.242117 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.260311 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6cvz"] Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.319859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" event={"ID":"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3","Type":"ContainerStarted","Data":"e5f65e023da5bad38c447e0bd0cc3824574fc9579004385f39d2648b591c4441"} Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.343463 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" podStartSLOduration=1.9394210649999999 podStartE2EDuration="2.34344985s" podCreationTimestamp="2025-12-05 14:28:08 +0000 UTC" firstStartedPulling="2025-12-05 14:28:09.17589872 +0000 UTC m=+1897.723496869" lastFinishedPulling="2025-12-05 14:28:09.579927515 +0000 UTC m=+1898.127525654" observedRunningTime="2025-12-05 14:28:10.341112034 +0000 UTC m=+1898.888710173" watchObservedRunningTime="2025-12-05 14:28:10.34344985 +0000 UTC m=+1898.891047989" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.349653 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-utilities\") pod \"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.349736 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-catalog-content\") pod \"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.351577 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rknm\" (UniqueName: \"kubernetes.io/projected/f0e7f453-6498-4c0b-a14a-04b0118939fc-kube-api-access-9rknm\") pod \"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.453259 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-catalog-content\") pod \"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.453607 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rknm\" (UniqueName: \"kubernetes.io/projected/f0e7f453-6498-4c0b-a14a-04b0118939fc-kube-api-access-9rknm\") pod \"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.453721 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-utilities\") pod 
\"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.454393 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-utilities\") pod \"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.454847 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-catalog-content\") pod \"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.477599 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rknm\" (UniqueName: \"kubernetes.io/projected/f0e7f453-6498-4c0b-a14a-04b0118939fc-kube-api-access-9rknm\") pod \"certified-operators-c6cvz\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:10 crc kubenswrapper[4858]: I1205 14:28:10.559411 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:11 crc kubenswrapper[4858]: I1205 14:28:11.044416 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmsdb"] Dec 05 14:28:11 crc kubenswrapper[4858]: I1205 14:28:11.052365 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bmsdb"] Dec 05 14:28:11 crc kubenswrapper[4858]: I1205 14:28:11.105513 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6cvz"] Dec 05 14:28:11 crc kubenswrapper[4858]: I1205 14:28:11.335368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6cvz" event={"ID":"f0e7f453-6498-4c0b-a14a-04b0118939fc","Type":"ContainerStarted","Data":"b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8"} Dec 05 14:28:11 crc kubenswrapper[4858]: I1205 14:28:11.336492 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6cvz" event={"ID":"f0e7f453-6498-4c0b-a14a-04b0118939fc","Type":"ContainerStarted","Data":"750bb75daacdb90d2511f7a48ba72910131550bbe9d9bf4ff0a32f254e4064ce"} Dec 05 14:28:11 crc kubenswrapper[4858]: I1205 14:28:11.910580 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eab50221-12a4-4a60-910e-d020c85a5e7a" path="/var/lib/kubelet/pods/eab50221-12a4-4a60-910e-d020c85a5e7a/volumes" Dec 05 14:28:12 crc kubenswrapper[4858]: I1205 14:28:12.345203 4858 generic.go:334] "Generic (PLEG): container finished" podID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerID="b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8" exitCode=0 Dec 05 14:28:12 crc kubenswrapper[4858]: I1205 14:28:12.345240 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6cvz" event={"ID":"f0e7f453-6498-4c0b-a14a-04b0118939fc","Type":"ContainerDied","Data":"b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8"} Dec 05 14:28:13 crc kubenswrapper[4858]: I1205 14:28:13.356760 4858 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6cvz" event={"ID":"f0e7f453-6498-4c0b-a14a-04b0118939fc","Type":"ContainerStarted","Data":"6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d"} Dec 05 14:28:14 crc kubenswrapper[4858]: I1205 14:28:14.370105 4858 generic.go:334] "Generic (PLEG): container finished" podID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerID="6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d" exitCode=0 Dec 05 14:28:14 crc kubenswrapper[4858]: I1205 14:28:14.370166 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6cvz" event={"ID":"f0e7f453-6498-4c0b-a14a-04b0118939fc","Type":"ContainerDied","Data":"6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d"} Dec 05 14:28:15 crc kubenswrapper[4858]: I1205 14:28:15.380442 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6cvz" event={"ID":"f0e7f453-6498-4c0b-a14a-04b0118939fc","Type":"ContainerStarted","Data":"d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935"} Dec 05 14:28:15 crc kubenswrapper[4858]: I1205 14:28:15.400735 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6cvz" podStartSLOduration=2.9740580100000003 podStartE2EDuration="5.400711921s" podCreationTimestamp="2025-12-05 14:28:10 +0000 UTC" firstStartedPulling="2025-12-05 14:28:12.347100246 +0000 UTC m=+1900.894698385" lastFinishedPulling="2025-12-05 14:28:14.773754157 +0000 UTC m=+1903.321352296" observedRunningTime="2025-12-05 14:28:15.398987872 +0000 UTC m=+1903.946586021" watchObservedRunningTime="2025-12-05 14:28:15.400711921 +0000 UTC m=+1903.948310060" Dec 05 14:28:16 crc kubenswrapper[4858]: I1205 14:28:16.899752 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:28:16 crc kubenswrapper[4858]: E1205 14:28:16.900143 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:28:20 crc kubenswrapper[4858]: I1205 14:28:20.560218 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:20 crc kubenswrapper[4858]: I1205 14:28:20.560715 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:20 crc kubenswrapper[4858]: I1205 14:28:20.618707 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:21 crc kubenswrapper[4858]: I1205 14:28:21.466277 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:21 crc kubenswrapper[4858]: I1205 14:28:21.544041 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6cvz"] Dec 05 14:28:23 crc kubenswrapper[4858]: I1205 14:28:23.439196 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-c6cvz" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerName="registry-server" containerID="cri-o://d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935" gracePeriod=2 Dec 05 14:28:23 crc kubenswrapper[4858]: I1205 14:28:23.903996 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:23 crc kubenswrapper[4858]: I1205 14:28:23.913631 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-catalog-content\") pod \"f0e7f453-6498-4c0b-a14a-04b0118939fc\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " Dec 05 14:28:23 crc kubenswrapper[4858]: I1205 14:28:23.913868 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rknm\" (UniqueName: \"kubernetes.io/projected/f0e7f453-6498-4c0b-a14a-04b0118939fc-kube-api-access-9rknm\") pod \"f0e7f453-6498-4c0b-a14a-04b0118939fc\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " Dec 05 14:28:23 crc kubenswrapper[4858]: I1205 14:28:23.913964 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-utilities\") pod \"f0e7f453-6498-4c0b-a14a-04b0118939fc\" (UID: \"f0e7f453-6498-4c0b-a14a-04b0118939fc\") " Dec 05 14:28:23 crc kubenswrapper[4858]: I1205 14:28:23.926019 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-utilities" (OuterVolumeSpecName: "utilities") pod "f0e7f453-6498-4c0b-a14a-04b0118939fc" (UID: "f0e7f453-6498-4c0b-a14a-04b0118939fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:28:23 crc kubenswrapper[4858]: I1205 14:28:23.957841 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e7f453-6498-4c0b-a14a-04b0118939fc-kube-api-access-9rknm" (OuterVolumeSpecName: "kube-api-access-9rknm") pod "f0e7f453-6498-4c0b-a14a-04b0118939fc" (UID: "f0e7f453-6498-4c0b-a14a-04b0118939fc"). InnerVolumeSpecName "kube-api-access-9rknm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.008747 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0e7f453-6498-4c0b-a14a-04b0118939fc" (UID: "f0e7f453-6498-4c0b-a14a-04b0118939fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.017435 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rknm\" (UniqueName: \"kubernetes.io/projected/f0e7f453-6498-4c0b-a14a-04b0118939fc-kube-api-access-9rknm\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.017485 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.017496 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e7f453-6498-4c0b-a14a-04b0118939fc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.449722 4858 generic.go:334] "Generic (PLEG): container finished" podID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerID="d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935" exitCode=0 Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.449766 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6cvz" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.449788 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6cvz" event={"ID":"f0e7f453-6498-4c0b-a14a-04b0118939fc","Type":"ContainerDied","Data":"d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935"} Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.449848 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6cvz" event={"ID":"f0e7f453-6498-4c0b-a14a-04b0118939fc","Type":"ContainerDied","Data":"750bb75daacdb90d2511f7a48ba72910131550bbe9d9bf4ff0a32f254e4064ce"} Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.449871 4858 scope.go:117] "RemoveContainer" containerID="d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.485125 4858 scope.go:117] "RemoveContainer" containerID="6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.495971 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6cvz"] Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.505712 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c6cvz"] Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.535131 4858 scope.go:117] "RemoveContainer" containerID="b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.560423 4858 scope.go:117] "RemoveContainer" containerID="d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935" Dec 05 14:28:24 crc kubenswrapper[4858]: E1205 14:28:24.560980 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935\": container with ID starting with d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935 not found: ID does not exist" containerID="d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.561100 
4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935"} err="failed to get container status \"d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935\": rpc error: code = NotFound desc = could not find container \"d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935\": container with ID starting with d132079f29bf7c086f0a8704bc27bb8d9455c9d13628ce7e6ef9e96b02fa7935 not found: ID does not exist" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.561179 4858 scope.go:117] "RemoveContainer" containerID="6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d" Dec 05 14:28:24 crc kubenswrapper[4858]: E1205 14:28:24.561980 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d\": container with ID starting with 6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d not found: ID does not exist" containerID="6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.562037 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d"} err="failed to get container status \"6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d\": rpc error: code = NotFound desc = could not find container \"6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d\": container with ID starting with 6bcbfa5a20c7357be99f8acce6554e530ec74f06d47865079779ac18e324c23d not found: ID does not exist" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.562067 4858 scope.go:117] "RemoveContainer" containerID="b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8" Dec 05 14:28:24 crc kubenswrapper[4858]: E1205 14:28:24.562419 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8\": container with ID starting with b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8 not found: ID does not exist" containerID="b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8" Dec 05 14:28:24 crc kubenswrapper[4858]: I1205 14:28:24.562485 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8"} err="failed to get container status \"b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8\": rpc error: code = NotFound desc = could not find container \"b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8\": container with ID starting with b9710c6ce70ee797385e6591bfbb55439a823058f385c997265415da8713bda8 not found: ID does not exist" Dec 05 14:28:25 crc kubenswrapper[4858]: I1205 14:28:25.909999 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" path="/var/lib/kubelet/pods/f0e7f453-6498-4c0b-a14a-04b0118939fc/volumes"
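The RemoveContainer / "Error syncing pod, skipping" pair for machine-config-daemon at 14:28:16 above repeats below at 14:28:30, 14:28:44, 14:28:58, 14:29:11, 14:29:24 and 14:29:36, until the container finally restarts at 14:29:48: each intermediate sync is refused while the pod waits out CrashLoopBackOff, whose cap is the "back-off 5m0s" in the message. A sketch of the nominal schedule (10 s initial delay doubling to a 5 m cap; kubelet also applies jitter and resets the backoff after a period of stable running, which this toy ignores):

def crashloop_delays(restarts, initial=10.0, factor=2.0, cap=300.0):
    """Nominal CrashLoopBackOff schedule: 10s doubling to a 5m cap."""
    delay = initial
    for attempt in range(1, restarts + 1):
        yield attempt, min(delay, cap)
        delay *= factor

print([d for _, d in crashloop_delays(7)])
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]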
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:28:36 crc kubenswrapper[4858]: I1205 14:28:36.084231 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7bj86"] Dec 05 14:28:36 crc kubenswrapper[4858]: I1205 14:28:36.095168 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7bj86"] Dec 05 14:28:37 crc kubenswrapper[4858]: I1205 14:28:37.912598 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0b9622-7ee2-433a-a43d-a2ea667bd7f7" path="/var/lib/kubelet/pods/2c0b9622-7ee2-433a-a43d-a2ea667bd7f7/volumes" Dec 05 14:28:38 crc kubenswrapper[4858]: I1205 14:28:38.043003 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46pzh"] Dec 05 14:28:38 crc kubenswrapper[4858]: I1205 14:28:38.055782 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46pzh"] Dec 05 14:28:39 crc kubenswrapper[4858]: I1205 14:28:39.918404 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb393c0-f903-4e17-82dd-84392e4231aa" path="/var/lib/kubelet/pods/1bb393c0-f903-4e17-82dd-84392e4231aa/volumes" Dec 05 14:28:44 crc kubenswrapper[4858]: I1205 14:28:44.899460 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:28:44 crc kubenswrapper[4858]: E1205 14:28:44.900168 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:28:51 crc kubenswrapper[4858]: I1205 14:28:51.682599 4858 generic.go:334] "Generic (PLEG): container finished" podID="7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3" containerID="e5f65e023da5bad38c447e0bd0cc3824574fc9579004385f39d2648b591c4441" exitCode=0 Dec 05 14:28:51 crc kubenswrapper[4858]: I1205 14:28:51.682648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" event={"ID":"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3","Type":"ContainerDied","Data":"e5f65e023da5bad38c447e0bd0cc3824574fc9579004385f39d2648b591c4441"} Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.152008 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.183597 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8zc\" (UniqueName: \"kubernetes.io/projected/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-kube-api-access-6c8zc\") pod \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.183657 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-inventory\") pod \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.183694 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-ssh-key\") pod \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\" (UID: \"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3\") " Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.193013 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-kube-api-access-6c8zc" (OuterVolumeSpecName: "kube-api-access-6c8zc") pod "7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3" (UID: "7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3"). InnerVolumeSpecName "kube-api-access-6c8zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.225174 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-inventory" (OuterVolumeSpecName: "inventory") pod "7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3" (UID: "7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.247247 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3" (UID: "7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.286442 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c8zc\" (UniqueName: \"kubernetes.io/projected/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-kube-api-access-6c8zc\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.286480 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.286492 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.703280 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" event={"ID":"7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3","Type":"ContainerDied","Data":"5119af2bc634c77f9f6353dbb2274b485b51e58a3f7e764ac29e57b1982004f2"} Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.703315 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5119af2bc634c77f9f6353dbb2274b485b51e58a3f7e764ac29e57b1982004f2" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.703356 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7j5xh" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.804406 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf"] Dec 05 14:28:53 crc kubenswrapper[4858]: E1205 14:28:53.805031 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerName="extract-utilities" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.805046 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerName="extract-utilities" Dec 05 14:28:53 crc kubenswrapper[4858]: E1205 14:28:53.805058 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.805065 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:28:53 crc kubenswrapper[4858]: E1205 14:28:53.805090 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerName="extract-content" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.805096 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerName="extract-content" Dec 05 14:28:53 crc kubenswrapper[4858]: E1205 14:28:53.805108 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerName="registry-server" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.805113 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerName="registry-server" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.805295 4858 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f0e7f453-6498-4c0b-a14a-04b0118939fc" containerName="registry-server" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.805316 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac6ddb1-69b5-4352-b5b5-02076bbf1fc3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.805919 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.808622 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.809596 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.810106 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.810669 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.833350 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf"] Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.897693 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2zg\" (UniqueName: \"kubernetes.io/projected/4e71d017-b1a8-4445-85a3-96a23867418e-kube-api-access-5h2zg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.897855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:53 crc kubenswrapper[4858]: I1205 14:28:53.897926 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.000007 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.000095 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.000129 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h2zg\" (UniqueName: \"kubernetes.io/projected/4e71d017-b1a8-4445-85a3-96a23867418e-kube-api-access-5h2zg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.005387 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.005491 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.015840 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h2zg\" (UniqueName: \"kubernetes.io/projected/4e71d017-b1a8-4445-85a3-96a23867418e-kube-api-access-5h2zg\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.125199 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.684311 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf"] Dec 05 14:28:54 crc kubenswrapper[4858]: I1205 14:28:54.715099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" event={"ID":"4e71d017-b1a8-4445-85a3-96a23867418e","Type":"ContainerStarted","Data":"a8f736d49ba5fe3aa9b912147cb0dc72fb214d074a9abf057fd72d8fb9239fc6"} Dec 05 14:28:55 crc kubenswrapper[4858]: I1205 14:28:55.724771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" event={"ID":"4e71d017-b1a8-4445-85a3-96a23867418e","Type":"ContainerStarted","Data":"cc7a9a2668ddd580a7a7a7d310e2682cb1b88119fe5fa29fcd68e980b2e935cc"} Dec 05 14:28:55 crc kubenswrapper[4858]: I1205 14:28:55.759405 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" podStartSLOduration=2.319262299 podStartE2EDuration="2.759386446s" podCreationTimestamp="2025-12-05 14:28:53 +0000 UTC" firstStartedPulling="2025-12-05 14:28:54.702292836 +0000 UTC m=+1943.249890975" lastFinishedPulling="2025-12-05 14:28:55.142416983 +0000 UTC m=+1943.690015122" observedRunningTime="2025-12-05 14:28:55.750318033 +0000 UTC m=+1944.297916192" watchObservedRunningTime="2025-12-05 14:28:55.759386446 +0000 UTC m=+1944.306984595" Dec 05 14:28:58 crc kubenswrapper[4858]: I1205 14:28:58.702165 4858 scope.go:117] "RemoveContainer" containerID="df796af70d02d561de7faa3ab20457fdeb0deb36e97d086dc241cc025289fc4c" Dec 05 14:28:58 crc kubenswrapper[4858]: I1205 14:28:58.729895 4858 scope.go:117] "RemoveContainer" containerID="9b82f2f0117c9462f2939ab45183c7a185d7bec6b48b4c77704b23e6abd9c29b" Dec 05 14:28:58 crc kubenswrapper[4858]: I1205 14:28:58.781877 4858 scope.go:117] "RemoveContainer" containerID="f6d9b43f5dd92889cc375b17ea5654a67f049c39e5a3cd348ad7a53f602d2850" Dec 05 14:28:58 crc kubenswrapper[4858]: I1205 14:28:58.902858 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:28:58 crc kubenswrapper[4858]: E1205 14:28:58.903178 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:29:11 crc kubenswrapper[4858]: I1205 14:29:11.906337 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:29:11 crc kubenswrapper[4858]: E1205 14:29:11.907195 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:29:21 crc kubenswrapper[4858]: I1205 
Dec 05 14:29:21 crc kubenswrapper[4858]: I1205 14:29:21.038503 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-b5cvh"] Dec 05 14:29:21 crc kubenswrapper[4858]: I1205 14:29:21.053251 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-b5cvh"] Dec 05 14:29:21 crc kubenswrapper[4858]: I1205 14:29:21.913962 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4253ece2-408f-490c-82ef-56b8ae47aa21" path="/var/lib/kubelet/pods/4253ece2-408f-490c-82ef-56b8ae47aa21/volumes" Dec 05 14:29:24 crc kubenswrapper[4858]: I1205 14:29:24.899699 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:29:24 crc kubenswrapper[4858]: E1205 14:29:24.900291 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:29:36 crc kubenswrapper[4858]: I1205 14:29:36.899500 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:29:36 crc kubenswrapper[4858]: E1205 14:29:36.900075 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:29:48 crc kubenswrapper[4858]: I1205 14:29:48.899213 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:29:49 crc kubenswrapper[4858]: I1205 14:29:49.195224 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"eb7be8b253883532e4d55d9c3cb45201cb840b781ba34bd639f96f87bb561d52"} Dec 05 14:29:52 crc kubenswrapper[4858]: I1205 14:29:52.220584 4858 generic.go:334] "Generic (PLEG): container finished" podID="4e71d017-b1a8-4445-85a3-96a23867418e" containerID="cc7a9a2668ddd580a7a7a7d310e2682cb1b88119fe5fa29fcd68e980b2e935cc" exitCode=0 Dec 05 14:29:52 crc kubenswrapper[4858]: I1205 14:29:52.220644 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" event={"ID":"4e71d017-b1a8-4445-85a3-96a23867418e","Type":"ContainerDied","Data":"cc7a9a2668ddd580a7a7a7d310e2682cb1b88119fe5fa29fcd68e980b2e935cc"} Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.648898 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.773261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-inventory\") pod \"4e71d017-b1a8-4445-85a3-96a23867418e\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.773625 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h2zg\" (UniqueName: \"kubernetes.io/projected/4e71d017-b1a8-4445-85a3-96a23867418e-kube-api-access-5h2zg\") pod \"4e71d017-b1a8-4445-85a3-96a23867418e\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.774494 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-ssh-key\") pod \"4e71d017-b1a8-4445-85a3-96a23867418e\" (UID: \"4e71d017-b1a8-4445-85a3-96a23867418e\") " Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.792989 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e71d017-b1a8-4445-85a3-96a23867418e-kube-api-access-5h2zg" (OuterVolumeSpecName: "kube-api-access-5h2zg") pod "4e71d017-b1a8-4445-85a3-96a23867418e" (UID: "4e71d017-b1a8-4445-85a3-96a23867418e"). InnerVolumeSpecName "kube-api-access-5h2zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.809531 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-inventory" (OuterVolumeSpecName: "inventory") pod "4e71d017-b1a8-4445-85a3-96a23867418e" (UID: "4e71d017-b1a8-4445-85a3-96a23867418e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.814221 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4e71d017-b1a8-4445-85a3-96a23867418e" (UID: "4e71d017-b1a8-4445-85a3-96a23867418e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.876613 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.876683 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e71d017-b1a8-4445-85a3-96a23867418e-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:29:53 crc kubenswrapper[4858]: I1205 14:29:53.876694 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h2zg\" (UniqueName: \"kubernetes.io/projected/4e71d017-b1a8-4445-85a3-96a23867418e-kube-api-access-5h2zg\") on node \"crc\" DevicePath \"\"" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.239767 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" event={"ID":"4e71d017-b1a8-4445-85a3-96a23867418e","Type":"ContainerDied","Data":"a8f736d49ba5fe3aa9b912147cb0dc72fb214d074a9abf057fd72d8fb9239fc6"} Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.240302 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f736d49ba5fe3aa9b912147cb0dc72fb214d074a9abf057fd72d8fb9239fc6" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.240014 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-9wgvf" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.329277 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9mzxm"] Dec 05 14:29:54 crc kubenswrapper[4858]: E1205 14:29:54.329665 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e71d017-b1a8-4445-85a3-96a23867418e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.329683 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e71d017-b1a8-4445-85a3-96a23867418e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.329912 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e71d017-b1a8-4445-85a3-96a23867418e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.330494 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.332399 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.336282 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.336528 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.337581 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.357643 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9mzxm"] Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.387334 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.387489 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.387618 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5b7\" (UniqueName: \"kubernetes.io/projected/d6753340-0b58-47dc-8fe8-b1ca10d94278-kube-api-access-pr5b7\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.488895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.488977 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5b7\" (UniqueName: \"kubernetes.io/projected/d6753340-0b58-47dc-8fe8-b1ca10d94278-kube-api-access-pr5b7\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.489046 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc 
kubenswrapper[4858]: I1205 14:29:54.502240 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.502739 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.513965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5b7\" (UniqueName: \"kubernetes.io/projected/d6753340-0b58-47dc-8fe8-b1ca10d94278-kube-api-access-pr5b7\") pod \"ssh-known-hosts-edpm-deployment-9mzxm\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:54 crc kubenswrapper[4858]: I1205 14:29:54.649075 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:29:55 crc kubenswrapper[4858]: I1205 14:29:55.213124 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9mzxm"] Dec 05 14:29:55 crc kubenswrapper[4858]: I1205 14:29:55.248680 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" event={"ID":"d6753340-0b58-47dc-8fe8-b1ca10d94278","Type":"ContainerStarted","Data":"d9ec3b4497b1a7bbfff05986ba41399225a8f78fa153f412151a930d65b1b4e7"} Dec 05 14:29:56 crc kubenswrapper[4858]: I1205 14:29:56.257811 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" event={"ID":"d6753340-0b58-47dc-8fe8-b1ca10d94278","Type":"ContainerStarted","Data":"7a16ab683b9cfa622d341a4233281136ce19ebd75a3d94dba0d8131319f17a5b"} Dec 05 14:29:56 crc kubenswrapper[4858]: I1205 14:29:56.278492 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" podStartSLOduration=1.894524223 podStartE2EDuration="2.278472356s" podCreationTimestamp="2025-12-05 14:29:54 +0000 UTC" firstStartedPulling="2025-12-05 14:29:55.214727001 +0000 UTC m=+2003.762325140" lastFinishedPulling="2025-12-05 14:29:55.598675114 +0000 UTC m=+2004.146273273" observedRunningTime="2025-12-05 14:29:56.271253614 +0000 UTC m=+2004.818851753" watchObservedRunningTime="2025-12-05 14:29:56.278472356 +0000 UTC m=+2004.826070515" Dec 05 14:29:58 crc kubenswrapper[4858]: I1205 14:29:58.923080 4858 scope.go:117] "RemoveContainer" containerID="7412c6ab7b334406184aa1471effa37d196ed5504abd0e29f1cf2dfedc69628f" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.137311 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv"] Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.139032 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.142509 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.142733 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.158528 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv"] Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.308896 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ca352c-c946-4269-9163-1adaf9364d32-config-volume\") pod \"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.308983 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvfc\" (UniqueName: \"kubernetes.io/projected/39ca352c-c946-4269-9163-1adaf9364d32-kube-api-access-hzvfc\") pod \"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.309030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39ca352c-c946-4269-9163-1adaf9364d32-secret-volume\") pod \"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.410801 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ca352c-c946-4269-9163-1adaf9364d32-config-volume\") pod \"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.410901 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzvfc\" (UniqueName: \"kubernetes.io/projected/39ca352c-c946-4269-9163-1adaf9364d32-kube-api-access-hzvfc\") pod \"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.410939 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39ca352c-c946-4269-9163-1adaf9364d32-secret-volume\") pod \"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.412119 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ca352c-c946-4269-9163-1adaf9364d32-config-volume\") pod 
\"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.424628 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39ca352c-c946-4269-9163-1adaf9364d32-secret-volume\") pod \"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.427478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzvfc\" (UniqueName: \"kubernetes.io/projected/39ca352c-c946-4269-9163-1adaf9364d32-kube-api-access-hzvfc\") pod \"collect-profiles-29415750-tgzbv\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.469350 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:00 crc kubenswrapper[4858]: I1205 14:30:00.931998 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv"] Dec 05 14:30:01 crc kubenswrapper[4858]: I1205 14:30:01.307456 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" event={"ID":"39ca352c-c946-4269-9163-1adaf9364d32","Type":"ContainerStarted","Data":"63041a3304dc05f4f2d720233cafaf2765acc456c3804cc8db2e07c2bf3911fa"} Dec 05 14:30:01 crc kubenswrapper[4858]: I1205 14:30:01.307755 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" event={"ID":"39ca352c-c946-4269-9163-1adaf9364d32","Type":"ContainerStarted","Data":"28c1f0ce84d56f91012a773f40cd4dde6b99a3c283fc1095b89fb4fc7afcb4b5"} Dec 05 14:30:01 crc kubenswrapper[4858]: I1205 14:30:01.331390 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" podStartSLOduration=1.331371291 podStartE2EDuration="1.331371291s" podCreationTimestamp="2025-12-05 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 14:30:01.331365201 +0000 UTC m=+2009.878963340" watchObservedRunningTime="2025-12-05 14:30:01.331371291 +0000 UTC m=+2009.878969430" Dec 05 14:30:02 crc kubenswrapper[4858]: I1205 14:30:02.315696 4858 generic.go:334] "Generic (PLEG): container finished" podID="39ca352c-c946-4269-9163-1adaf9364d32" containerID="63041a3304dc05f4f2d720233cafaf2765acc456c3804cc8db2e07c2bf3911fa" exitCode=0 Dec 05 14:30:02 crc kubenswrapper[4858]: I1205 14:30:02.315778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" event={"ID":"39ca352c-c946-4269-9163-1adaf9364d32","Type":"ContainerDied","Data":"63041a3304dc05f4f2d720233cafaf2765acc456c3804cc8db2e07c2bf3911fa"} Dec 05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.325949 4858 generic.go:334] "Generic (PLEG): container finished" podID="d6753340-0b58-47dc-8fe8-b1ca10d94278" containerID="7a16ab683b9cfa622d341a4233281136ce19ebd75a3d94dba0d8131319f17a5b" exitCode=0 Dec 
05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.326050 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" event={"ID":"d6753340-0b58-47dc-8fe8-b1ca10d94278","Type":"ContainerDied","Data":"7a16ab683b9cfa622d341a4233281136ce19ebd75a3d94dba0d8131319f17a5b"} Dec 05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.850232 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.936107 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39ca352c-c946-4269-9163-1adaf9364d32-secret-volume\") pod \"39ca352c-c946-4269-9163-1adaf9364d32\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " Dec 05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.936189 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzvfc\" (UniqueName: \"kubernetes.io/projected/39ca352c-c946-4269-9163-1adaf9364d32-kube-api-access-hzvfc\") pod \"39ca352c-c946-4269-9163-1adaf9364d32\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " Dec 05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.937171 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ca352c-c946-4269-9163-1adaf9364d32-config-volume\") pod \"39ca352c-c946-4269-9163-1adaf9364d32\" (UID: \"39ca352c-c946-4269-9163-1adaf9364d32\") " Dec 05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.937733 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ca352c-c946-4269-9163-1adaf9364d32-config-volume" (OuterVolumeSpecName: "config-volume") pod "39ca352c-c946-4269-9163-1adaf9364d32" (UID: "39ca352c-c946-4269-9163-1adaf9364d32"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.937969 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39ca352c-c946-4269-9163-1adaf9364d32-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:03 crc kubenswrapper[4858]: I1205 14:30:03.941450 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39ca352c-c946-4269-9163-1adaf9364d32-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "39ca352c-c946-4269-9163-1adaf9364d32" (UID: "39ca352c-c946-4269-9163-1adaf9364d32"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
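The collect-profiles teardown above follows the same three-step reconciler ladder that every pod in this log does, per volume: operationExecutor.UnmountVolume started, then UnmountVolume.TearDown succeeded, then Volume detached. A small sketch that buckets those phases by volume name; the patterns are fitted to the exact strings in this excerpt (an assumption, not a stable interface), and klog's escaped quotes are normalized first:

import re
from collections import defaultdict

UNMOUNT_STARTED = re.compile(r'UnmountVolume started for volume "(?P<vol>[^"]+)"')
TEARDOWN_OK = re.compile(
    r'UnmountVolume\.TearDown succeeded .*\(OuterVolumeSpecName: "(?P<vol>[^"]+)"\)'
)
DETACHED = re.compile(r'Volume detached for volume "(?P<vol>[^"]+)"')

def volume_timeline(lines):
    """Group unmount-started / teardown-ok / detached phases per volume."""
    timeline = defaultdict(list)
    for raw in lines:
        line = raw.replace('\\"', '"')  # undo klog's escaped quotes
        for phase, rx in (("unmount-started", UNMOUNT_STARTED),
                          ("teardown-ok", TEARDOWN_OK),
                          ("detached", DETACHED)):
            m = rx.search(line)
            if m:
                timeline[m.group("vol")].append(phase)
    return dict(timeline)

Feeding it the 14:30:03-14:30:04 lines here should yield all three phases, in order, for config-volume, secret-volume, and kube-api-access-hzvfc.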
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.039368 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39ca352c-c946-4269-9163-1adaf9364d32-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.039406 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzvfc\" (UniqueName: \"kubernetes.io/projected/39ca352c-c946-4269-9163-1adaf9364d32-kube-api-access-hzvfc\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.335476 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.335727 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv" event={"ID":"39ca352c-c946-4269-9163-1adaf9364d32","Type":"ContainerDied","Data":"28c1f0ce84d56f91012a773f40cd4dde6b99a3c283fc1095b89fb4fc7afcb4b5"} Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.335772 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c1f0ce84d56f91012a773f40cd4dde6b99a3c283fc1095b89fb4fc7afcb4b5" Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.401463 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"] Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.409146 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415705-5fszb"] Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.694178 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.757124 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr5b7\" (UniqueName: \"kubernetes.io/projected/d6753340-0b58-47dc-8fe8-b1ca10d94278-kube-api-access-pr5b7\") pod \"d6753340-0b58-47dc-8fe8-b1ca10d94278\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.757187 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam\") pod \"d6753340-0b58-47dc-8fe8-b1ca10d94278\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.757266 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-inventory-0\") pod \"d6753340-0b58-47dc-8fe8-b1ca10d94278\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.762518 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6753340-0b58-47dc-8fe8-b1ca10d94278-kube-api-access-pr5b7" (OuterVolumeSpecName: "kube-api-access-pr5b7") pod "d6753340-0b58-47dc-8fe8-b1ca10d94278" (UID: "d6753340-0b58-47dc-8fe8-b1ca10d94278"). InnerVolumeSpecName "kube-api-access-pr5b7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:30:04 crc kubenswrapper[4858]: E1205 14:30:04.783496 4858 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam podName:d6753340-0b58-47dc-8fe8-b1ca10d94278 nodeName:}" failed. No retries permitted until 2025-12-05 14:30:05.28347299 +0000 UTC m=+2013.831071129 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam") pod "d6753340-0b58-47dc-8fe8-b1ca10d94278" (UID: "d6753340-0b58-47dc-8fe8-b1ca10d94278") : error deleting /var/lib/kubelet/pods/d6753340-0b58-47dc-8fe8-b1ca10d94278/volume-subpaths: remove /var/lib/kubelet/pods/d6753340-0b58-47dc-8fe8-b1ca10d94278/volume-subpaths: no such file or directory Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.786086 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d6753340-0b58-47dc-8fe8-b1ca10d94278" (UID: "d6753340-0b58-47dc-8fe8-b1ca10d94278"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.859759 4858 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:04 crc kubenswrapper[4858]: I1205 14:30:04.859787 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr5b7\" (UniqueName: \"kubernetes.io/projected/d6753340-0b58-47dc-8fe8-b1ca10d94278-kube-api-access-pr5b7\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.345705 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" event={"ID":"d6753340-0b58-47dc-8fe8-b1ca10d94278","Type":"ContainerDied","Data":"d9ec3b4497b1a7bbfff05986ba41399225a8f78fa153f412151a930d65b1b4e7"} Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.345757 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9ec3b4497b1a7bbfff05986ba41399225a8f78fa153f412151a930d65b1b4e7" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.345845 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9mzxm" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.370592 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam\") pod \"d6753340-0b58-47dc-8fe8-b1ca10d94278\" (UID: \"d6753340-0b58-47dc-8fe8-b1ca10d94278\") " Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.374309 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d6753340-0b58-47dc-8fe8-b1ca10d94278" (UID: "d6753340-0b58-47dc-8fe8-b1ca10d94278"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.489864 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6753340-0b58-47dc-8fe8-b1ca10d94278-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.591865 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn"] Dec 05 14:30:05 crc kubenswrapper[4858]: E1205 14:30:05.592322 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ca352c-c946-4269-9163-1adaf9364d32" containerName="collect-profiles" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.592341 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ca352c-c946-4269-9163-1adaf9364d32" containerName="collect-profiles" Dec 05 14:30:05 crc kubenswrapper[4858]: E1205 14:30:05.592380 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6753340-0b58-47dc-8fe8-b1ca10d94278" containerName="ssh-known-hosts-edpm-deployment" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.592387 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6753340-0b58-47dc-8fe8-b1ca10d94278" containerName="ssh-known-hosts-edpm-deployment" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.592560 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ca352c-c946-4269-9163-1adaf9364d32" containerName="collect-profiles" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.592584 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6753340-0b58-47dc-8fe8-b1ca10d94278" containerName="ssh-known-hosts-edpm-deployment" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.593240 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.603790 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn"] Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.693080 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9qxz\" (UniqueName: \"kubernetes.io/projected/8c589462-7f5e-4d4d-9bfa-c0586e598000-kube-api-access-z9qxz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.693150 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.693577 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.795266 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.795604 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9qxz\" (UniqueName: \"kubernetes.io/projected/8c589462-7f5e-4d4d-9bfa-c0586e598000-kube-api-access-z9qxz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.795670 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.808633 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.809505 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.816702 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9qxz\" (UniqueName: \"kubernetes.io/projected/8c589462-7f5e-4d4d-9bfa-c0586e598000-kube-api-access-z9qxz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lgzrn\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.913776 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cedb2565-0837-4473-89e6-84269d6e3766" path="/var/lib/kubelet/pods/cedb2565-0837-4473-89e6-84269d6e3766/volumes" Dec 05 14:30:05 crc kubenswrapper[4858]: I1205 14:30:05.921012 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:06 crc kubenswrapper[4858]: I1205 14:30:06.432451 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn"] Dec 05 14:30:06 crc kubenswrapper[4858]: W1205 14:30:06.440252 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c589462_7f5e_4d4d_9bfa_c0586e598000.slice/crio-a7c0f88e8bb9004cd00da7287a1dd910132fdaad9ad4d1ac71f34b3376395dfa WatchSource:0}: Error finding container a7c0f88e8bb9004cd00da7287a1dd910132fdaad9ad4d1ac71f34b3376395dfa: Status 404 returned error can't find the container with id a7c0f88e8bb9004cd00da7287a1dd910132fdaad9ad4d1ac71f34b3376395dfa Dec 05 14:30:07 crc kubenswrapper[4858]: I1205 14:30:07.363042 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" event={"ID":"8c589462-7f5e-4d4d-9bfa-c0586e598000","Type":"ContainerStarted","Data":"a2e6b35aba827a310d0a1271b09d985ca2011a20a8b504b4359609163a93aff0"} Dec 05 14:30:07 crc kubenswrapper[4858]: I1205 14:30:07.363750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" event={"ID":"8c589462-7f5e-4d4d-9bfa-c0586e598000","Type":"ContainerStarted","Data":"a7c0f88e8bb9004cd00da7287a1dd910132fdaad9ad4d1ac71f34b3376395dfa"} Dec 05 14:30:07 crc kubenswrapper[4858]: I1205 14:30:07.386118 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" podStartSLOduration=1.942510978 podStartE2EDuration="2.38610352s" podCreationTimestamp="2025-12-05 14:30:05 +0000 UTC" firstStartedPulling="2025-12-05 14:30:06.444055534 +0000 UTC m=+2014.991653673" lastFinishedPulling="2025-12-05 14:30:06.887648076 +0000 UTC m=+2015.435246215" observedRunningTime="2025-12-05 14:30:07.381516473 +0000 UTC m=+2015.929114612" watchObservedRunningTime="2025-12-05 14:30:07.38610352 +0000 UTC m=+2015.933701659" Dec 05 14:30:17 crc kubenswrapper[4858]: I1205 14:30:17.440571 4858 generic.go:334] "Generic (PLEG): container finished" podID="8c589462-7f5e-4d4d-9bfa-c0586e598000" containerID="a2e6b35aba827a310d0a1271b09d985ca2011a20a8b504b4359609163a93aff0" exitCode=0 Dec 05 14:30:17 crc kubenswrapper[4858]: I1205 14:30:17.440629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" 
event={"ID":"8c589462-7f5e-4d4d-9bfa-c0586e598000","Type":"ContainerDied","Data":"a2e6b35aba827a310d0a1271b09d985ca2011a20a8b504b4359609163a93aff0"} Dec 05 14:30:18 crc kubenswrapper[4858]: I1205 14:30:18.845843 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:18 crc kubenswrapper[4858]: I1205 14:30:18.915643 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-ssh-key\") pod \"8c589462-7f5e-4d4d-9bfa-c0586e598000\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " Dec 05 14:30:18 crc kubenswrapper[4858]: I1205 14:30:18.915768 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9qxz\" (UniqueName: \"kubernetes.io/projected/8c589462-7f5e-4d4d-9bfa-c0586e598000-kube-api-access-z9qxz\") pod \"8c589462-7f5e-4d4d-9bfa-c0586e598000\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " Dec 05 14:30:18 crc kubenswrapper[4858]: I1205 14:30:18.915842 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-inventory\") pod \"8c589462-7f5e-4d4d-9bfa-c0586e598000\" (UID: \"8c589462-7f5e-4d4d-9bfa-c0586e598000\") " Dec 05 14:30:18 crc kubenswrapper[4858]: I1205 14:30:18.922007 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c589462-7f5e-4d4d-9bfa-c0586e598000-kube-api-access-z9qxz" (OuterVolumeSpecName: "kube-api-access-z9qxz") pod "8c589462-7f5e-4d4d-9bfa-c0586e598000" (UID: "8c589462-7f5e-4d4d-9bfa-c0586e598000"). InnerVolumeSpecName "kube-api-access-z9qxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:30:18 crc kubenswrapper[4858]: I1205 14:30:18.946200 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-inventory" (OuterVolumeSpecName: "inventory") pod "8c589462-7f5e-4d4d-9bfa-c0586e598000" (UID: "8c589462-7f5e-4d4d-9bfa-c0586e598000"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:30:18 crc kubenswrapper[4858]: I1205 14:30:18.948406 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c589462-7f5e-4d4d-9bfa-c0586e598000" (UID: "8c589462-7f5e-4d4d-9bfa-c0586e598000"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.018062 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9qxz\" (UniqueName: \"kubernetes.io/projected/8c589462-7f5e-4d4d-9bfa-c0586e598000-kube-api-access-z9qxz\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.018102 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.018137 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c589462-7f5e-4d4d-9bfa-c0586e598000-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.457953 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" event={"ID":"8c589462-7f5e-4d4d-9bfa-c0586e598000","Type":"ContainerDied","Data":"a7c0f88e8bb9004cd00da7287a1dd910132fdaad9ad4d1ac71f34b3376395dfa"} Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.457994 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7c0f88e8bb9004cd00da7287a1dd910132fdaad9ad4d1ac71f34b3376395dfa" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.458008 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lgzrn" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.567792 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9"] Dec 05 14:30:19 crc kubenswrapper[4858]: E1205 14:30:19.568301 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c589462-7f5e-4d4d-9bfa-c0586e598000" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.568328 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c589462-7f5e-4d4d-9bfa-c0586e598000" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.568571 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c589462-7f5e-4d4d-9bfa-c0586e598000" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.569579 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.574198 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.574414 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.574592 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.574752 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.632208 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.632267 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.632341 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khgq8\" (UniqueName: \"kubernetes.io/projected/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-kube-api-access-khgq8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.715378 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9"] Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.733674 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khgq8\" (UniqueName: \"kubernetes.io/projected/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-kube-api-access-khgq8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.734043 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.734153 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: 
\"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.745930 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.753487 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.761673 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khgq8\" (UniqueName: \"kubernetes.io/projected/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-kube-api-access-khgq8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:19 crc kubenswrapper[4858]: I1205 14:30:19.894504 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:20 crc kubenswrapper[4858]: I1205 14:30:20.370991 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9"] Dec 05 14:30:20 crc kubenswrapper[4858]: I1205 14:30:20.467099 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" event={"ID":"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680","Type":"ContainerStarted","Data":"832a4b8bc1362126a588090d9ca9eec5c388ebf3719bd1a82ff9d4e474fb7af5"} Dec 05 14:30:21 crc kubenswrapper[4858]: I1205 14:30:21.479438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" event={"ID":"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680","Type":"ContainerStarted","Data":"cd4221dfc522e6c872b2a4149c6b6a66c11db04496b78219027885d62eedd46b"} Dec 05 14:30:21 crc kubenswrapper[4858]: I1205 14:30:21.522663 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" podStartSLOduration=2.12365656 podStartE2EDuration="2.522635875s" podCreationTimestamp="2025-12-05 14:30:19 +0000 UTC" firstStartedPulling="2025-12-05 14:30:20.376961722 +0000 UTC m=+2028.924559861" lastFinishedPulling="2025-12-05 14:30:20.775941037 +0000 UTC m=+2029.323539176" observedRunningTime="2025-12-05 14:30:21.501484418 +0000 UTC m=+2030.049082577" watchObservedRunningTime="2025-12-05 14:30:21.522635875 +0000 UTC m=+2030.070234014" Dec 05 14:30:32 crc kubenswrapper[4858]: I1205 14:30:32.582419 4858 generic.go:334] "Generic (PLEG): container finished" podID="b3bd07f6-0d66-49b4-bcaa-eb993e5bc680" containerID="cd4221dfc522e6c872b2a4149c6b6a66c11db04496b78219027885d62eedd46b" exitCode=0 Dec 05 14:30:32 crc kubenswrapper[4858]: I1205 14:30:32.582516 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" 
event={"ID":"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680","Type":"ContainerDied","Data":"cd4221dfc522e6c872b2a4149c6b6a66c11db04496b78219027885d62eedd46b"} Dec 05 14:30:33 crc kubenswrapper[4858]: I1205 14:30:33.995985 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.146943 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-ssh-key\") pod \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.147165 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khgq8\" (UniqueName: \"kubernetes.io/projected/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-kube-api-access-khgq8\") pod \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.147196 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-inventory\") pod \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\" (UID: \"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680\") " Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.152963 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-kube-api-access-khgq8" (OuterVolumeSpecName: "kube-api-access-khgq8") pod "b3bd07f6-0d66-49b4-bcaa-eb993e5bc680" (UID: "b3bd07f6-0d66-49b4-bcaa-eb993e5bc680"). InnerVolumeSpecName "kube-api-access-khgq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.173794 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b3bd07f6-0d66-49b4-bcaa-eb993e5bc680" (UID: "b3bd07f6-0d66-49b4-bcaa-eb993e5bc680"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.175962 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-inventory" (OuterVolumeSpecName: "inventory") pod "b3bd07f6-0d66-49b4-bcaa-eb993e5bc680" (UID: "b3bd07f6-0d66-49b4-bcaa-eb993e5bc680"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.249872 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khgq8\" (UniqueName: \"kubernetes.io/projected/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-kube-api-access-khgq8\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.250216 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.250230 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3bd07f6-0d66-49b4-bcaa-eb993e5bc680-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.601492 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" event={"ID":"b3bd07f6-0d66-49b4-bcaa-eb993e5bc680","Type":"ContainerDied","Data":"832a4b8bc1362126a588090d9ca9eec5c388ebf3719bd1a82ff9d4e474fb7af5"} Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.601532 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="832a4b8bc1362126a588090d9ca9eec5c388ebf3719bd1a82ff9d4e474fb7af5" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.601559 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wqww9" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.694174 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7"] Dec 05 14:30:34 crc kubenswrapper[4858]: E1205 14:30:34.694609 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3bd07f6-0d66-49b4-bcaa-eb993e5bc680" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.694626 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3bd07f6-0d66-49b4-bcaa-eb993e5bc680" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.694790 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3bd07f6-0d66-49b4-bcaa-eb993e5bc680" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.695417 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.701458 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.701552 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.701676 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.701901 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.702027 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.709051 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.709283 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.709439 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.717487 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7"] Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860051 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860112 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860132 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860236 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860331 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860372 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860401 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860490 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860534 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860555 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860586 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qkv\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-kube-api-access-28qkv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" 
(UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860861 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.860956 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.861060 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963017 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963100 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963165 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963201 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963229 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963244 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963269 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963299 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963319 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963387 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963464 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.963505 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qkv\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-kube-api-access-28qkv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.968581 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.969955 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.973208 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.973276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.973431 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.974170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.974753 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.975609 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.976179 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.976428 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.976977 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.977042 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.979434 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:34 crc kubenswrapper[4858]: I1205 14:30:34.987151 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qkv\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-kube-api-access-28qkv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:35 crc kubenswrapper[4858]: I1205 14:30:35.013806 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:30:35 crc kubenswrapper[4858]: I1205 14:30:35.537597 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7"] Dec 05 14:30:35 crc kubenswrapper[4858]: I1205 14:30:35.614196 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" event={"ID":"56c98e22-cfab-43d6-b579-b2444cdbccb2","Type":"ContainerStarted","Data":"c1be8b0b79ef5f195d5bd1e7eaaccc40c391691c91bd3a762f5ab2d5a7e45c98"} Dec 05 14:30:36 crc kubenswrapper[4858]: I1205 14:30:36.623378 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" event={"ID":"56c98e22-cfab-43d6-b579-b2444cdbccb2","Type":"ContainerStarted","Data":"c232dfb9be6003a461db0303ca027124d3971a3b4b4aca05ecab5bc5329e99e8"} Dec 05 14:30:36 crc kubenswrapper[4858]: I1205 14:30:36.646514 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" podStartSLOduration=2.015377499 podStartE2EDuration="2.646495872s" podCreationTimestamp="2025-12-05 14:30:34 +0000 UTC" firstStartedPulling="2025-12-05 14:30:35.540758776 +0000 UTC m=+2044.088356935" lastFinishedPulling="2025-12-05 14:30:36.171877149 +0000 UTC m=+2044.719475308" observedRunningTime="2025-12-05 14:30:36.639068017 +0000 UTC m=+2045.186666156" watchObservedRunningTime="2025-12-05 14:30:36.646495872 +0000 UTC m=+2045.194094001" Dec 05 14:30:58 crc kubenswrapper[4858]: I1205 14:30:58.988932 4858 scope.go:117] "RemoveContainer" containerID="c2104c72b6990c443ed3bc7434b5b7ccc9fcc3df8306832fa903138c5327e226" Dec 05 14:31:22 crc kubenswrapper[4858]: I1205 14:31:22.194086 4858 generic.go:334] "Generic (PLEG): container finished" podID="56c98e22-cfab-43d6-b579-b2444cdbccb2" containerID="c232dfb9be6003a461db0303ca027124d3971a3b4b4aca05ecab5bc5329e99e8" exitCode=0 Dec 05 14:31:22 crc kubenswrapper[4858]: I1205 14:31:22.194243 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" event={"ID":"56c98e22-cfab-43d6-b579-b2444cdbccb2","Type":"ContainerDied","Data":"c232dfb9be6003a461db0303ca027124d3971a3b4b4aca05ecab5bc5329e99e8"} Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.672381 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.817819 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-nova-combined-ca-bundle\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.817941 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-inventory\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818001 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qkv\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-kube-api-access-28qkv\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818029 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-libvirt-combined-ca-bundle\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818076 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-telemetry-combined-ca-bundle\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818164 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-ovn-default-certs-0\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818185 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-neutron-metadata-combined-ca-bundle\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " 
Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818237 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ssh-key\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818265 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-bootstrap-combined-ca-bundle\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ovn-combined-ca-bundle\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818366 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.818399 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-repo-setup-combined-ca-bundle\") pod \"56c98e22-cfab-43d6-b579-b2444cdbccb2\" (UID: \"56c98e22-cfab-43d6-b579-b2444cdbccb2\") " Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.824778 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.827050 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.827271 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.828133 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.828169 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.828200 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-kube-api-access-28qkv" (OuterVolumeSpecName: "kube-api-access-28qkv") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "kube-api-access-28qkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.828688 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.828719 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.829678 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.832065 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.832981 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.840931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.848660 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.852398 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-inventory" (OuterVolumeSpecName: "inventory") pod "56c98e22-cfab-43d6-b579-b2444cdbccb2" (UID: "56c98e22-cfab-43d6-b579-b2444cdbccb2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922747 4858 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922776 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922787 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922797 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922806 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922814 4858 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922850 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922861 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922869 4858 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922879 4858 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922887 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922894 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qkv\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-kube-api-access-28qkv\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922901 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c98e22-cfab-43d6-b579-b2444cdbccb2-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:23 crc kubenswrapper[4858]: I1205 14:31:23.922910 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/56c98e22-cfab-43d6-b579-b2444cdbccb2-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.326190 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" event={"ID":"56c98e22-cfab-43d6-b579-b2444cdbccb2","Type":"ContainerDied","Data":"c1be8b0b79ef5f195d5bd1e7eaaccc40c391691c91bd3a762f5ab2d5a7e45c98"} Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.326225 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nlhz7" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.326229 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1be8b0b79ef5f195d5bd1e7eaaccc40c391691c91bd3a762f5ab2d5a7e45c98" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.450316 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5"] Dec 05 14:31:24 crc kubenswrapper[4858]: E1205 14:31:24.450736 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c98e22-cfab-43d6-b579-b2444cdbccb2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.450755 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c98e22-cfab-43d6-b579-b2444cdbccb2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.450943 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c98e22-cfab-43d6-b579-b2444cdbccb2" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.451598 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.453799 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.454172 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.454212 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.454857 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.455161 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.468504 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5"] Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.633091 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ba61c605-6989-4a66-bf9d-31f536162568-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.633244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.633299 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.633377 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.633496 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8m4g\" (UniqueName: \"kubernetes.io/projected/ba61c605-6989-4a66-bf9d-31f536162568-kube-api-access-z8m4g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.735358 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.735423 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.735497 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.735602 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8m4g\" (UniqueName: \"kubernetes.io/projected/ba61c605-6989-4a66-bf9d-31f536162568-kube-api-access-z8m4g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.735638 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ba61c605-6989-4a66-bf9d-31f536162568-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.736769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ba61c605-6989-4a66-bf9d-31f536162568-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.739547 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.740389 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.745438 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.756071 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8m4g\" (UniqueName: \"kubernetes.io/projected/ba61c605-6989-4a66-bf9d-31f536162568-kube-api-access-z8m4g\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qlzx5\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:24 crc kubenswrapper[4858]: I1205 14:31:24.770310 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:31:25 crc kubenswrapper[4858]: I1205 14:31:25.314315 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5"] Dec 05 14:31:25 crc kubenswrapper[4858]: I1205 14:31:25.336260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" event={"ID":"ba61c605-6989-4a66-bf9d-31f536162568","Type":"ContainerStarted","Data":"a738126b5ae19795e1fbd83af9bc2a14f173c0c0259cf252c615466e76d36422"} Dec 05 14:31:26 crc kubenswrapper[4858]: I1205 14:31:26.348780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" event={"ID":"ba61c605-6989-4a66-bf9d-31f536162568","Type":"ContainerStarted","Data":"ced48e1090036710f44c0f77d9772931c39ab2fc4c0d91ca03d196ab84daee4f"} Dec 05 14:31:26 crc kubenswrapper[4858]: I1205 14:31:26.372121 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" podStartSLOduration=1.975113607 podStartE2EDuration="2.37210344s" podCreationTimestamp="2025-12-05 14:31:24 +0000 UTC" firstStartedPulling="2025-12-05 14:31:25.317422918 +0000 UTC m=+2093.865021057" lastFinishedPulling="2025-12-05 14:31:25.714412751 +0000 UTC m=+2094.262010890" observedRunningTime="2025-12-05 14:31:26.367229062 +0000 UTC m=+2094.914827241" watchObservedRunningTime="2025-12-05 14:31:26.37210344 +0000 UTC m=+2094.919701579" Dec 05 14:32:14 crc kubenswrapper[4858]: I1205 14:32:14.759803 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:32:14 crc kubenswrapper[4858]: I1205 14:32:14.760338 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:32:41 crc kubenswrapper[4858]: I1205 14:32:41.017531 4858 generic.go:334] "Generic (PLEG): container finished" podID="ba61c605-6989-4a66-bf9d-31f536162568" containerID="ced48e1090036710f44c0f77d9772931c39ab2fc4c0d91ca03d196ab84daee4f" exitCode=0 Dec 05 14:32:41 crc kubenswrapper[4858]: I1205 14:32:41.018106 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" event={"ID":"ba61c605-6989-4a66-bf9d-31f536162568","Type":"ContainerDied","Data":"ced48e1090036710f44c0f77d9772931c39ab2fc4c0d91ca03d196ab84daee4f"} Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.402447 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.444580 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-inventory\") pod \"ba61c605-6989-4a66-bf9d-31f536162568\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.444712 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8m4g\" (UniqueName: \"kubernetes.io/projected/ba61c605-6989-4a66-bf9d-31f536162568-kube-api-access-z8m4g\") pod \"ba61c605-6989-4a66-bf9d-31f536162568\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.444800 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ba61c605-6989-4a66-bf9d-31f536162568-ovncontroller-config-0\") pod \"ba61c605-6989-4a66-bf9d-31f536162568\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.444881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ssh-key\") pod \"ba61c605-6989-4a66-bf9d-31f536162568\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.445015 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ovn-combined-ca-bundle\") pod \"ba61c605-6989-4a66-bf9d-31f536162568\" (UID: \"ba61c605-6989-4a66-bf9d-31f536162568\") " Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.451239 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ba61c605-6989-4a66-bf9d-31f536162568" (UID: "ba61c605-6989-4a66-bf9d-31f536162568"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.452110 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba61c605-6989-4a66-bf9d-31f536162568-kube-api-access-z8m4g" (OuterVolumeSpecName: "kube-api-access-z8m4g") pod "ba61c605-6989-4a66-bf9d-31f536162568" (UID: "ba61c605-6989-4a66-bf9d-31f536162568"). InnerVolumeSpecName "kube-api-access-z8m4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.472679 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba61c605-6989-4a66-bf9d-31f536162568-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ba61c605-6989-4a66-bf9d-31f536162568" (UID: "ba61c605-6989-4a66-bf9d-31f536162568"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.482639 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ba61c605-6989-4a66-bf9d-31f536162568" (UID: "ba61c605-6989-4a66-bf9d-31f536162568"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.486177 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-inventory" (OuterVolumeSpecName: "inventory") pod "ba61c605-6989-4a66-bf9d-31f536162568" (UID: "ba61c605-6989-4a66-bf9d-31f536162568"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.546828 4858 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.546887 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.546904 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8m4g\" (UniqueName: \"kubernetes.io/projected/ba61c605-6989-4a66-bf9d-31f536162568-kube-api-access-z8m4g\") on node \"crc\" DevicePath \"\"" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.546917 4858 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ba61c605-6989-4a66-bf9d-31f536162568-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:32:42 crc kubenswrapper[4858]: I1205 14:32:42.546927 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba61c605-6989-4a66-bf9d-31f536162568-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.036940 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.036868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qlzx5" event={"ID":"ba61c605-6989-4a66-bf9d-31f536162568","Type":"ContainerDied","Data":"a738126b5ae19795e1fbd83af9bc2a14f173c0c0259cf252c615466e76d36422"} Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.037652 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a738126b5ae19795e1fbd83af9bc2a14f173c0c0259cf252c615466e76d36422" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.164956 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk"] Dec 05 14:32:43 crc kubenswrapper[4858]: E1205 14:32:43.165339 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba61c605-6989-4a66-bf9d-31f536162568" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.165355 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba61c605-6989-4a66-bf9d-31f536162568" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.165537 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba61c605-6989-4a66-bf9d-31f536162568" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.166276 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.169853 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.169884 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.173683 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.173923 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.174061 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.174232 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.188134 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk"] Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.261744 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zmbf\" (UniqueName: \"kubernetes.io/projected/4cdf2cd0-7712-42af-8481-4d3789977f39-kube-api-access-2zmbf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.261849 4858 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.261878 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.261916 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.261938 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.262440 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.363972 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.364030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zmbf\" (UniqueName: \"kubernetes.io/projected/4cdf2cd0-7712-42af-8481-4d3789977f39-kube-api-access-2zmbf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.364055 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.364080 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.364116 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.364141 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.368337 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.369047 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.375685 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.379686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.382795 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.386538 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zmbf\" (UniqueName: \"kubernetes.io/projected/4cdf2cd0-7712-42af-8481-4d3789977f39-kube-api-access-2zmbf\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:43 crc kubenswrapper[4858]: I1205 14:32:43.502793 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:32:44 crc kubenswrapper[4858]: I1205 14:32:44.074133 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk"] Dec 05 14:32:44 crc kubenswrapper[4858]: I1205 14:32:44.759840 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:32:44 crc kubenswrapper[4858]: I1205 14:32:44.760066 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:32:45 crc kubenswrapper[4858]: I1205 14:32:45.072368 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" event={"ID":"4cdf2cd0-7712-42af-8481-4d3789977f39","Type":"ContainerStarted","Data":"47eb9579c6e9fc72bc1e9c92803a8b19ed4a35717508823a18b40e28a5526179"} Dec 05 14:32:45 crc kubenswrapper[4858]: I1205 14:32:45.072723 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" event={"ID":"4cdf2cd0-7712-42af-8481-4d3789977f39","Type":"ContainerStarted","Data":"a9eab7cda9ca80bba8c4ce12229f50ba590cb42562afdf812fca21e32688ee74"} Dec 05 14:32:45 crc kubenswrapper[4858]: I1205 14:32:45.102628 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" podStartSLOduration=1.580311602 podStartE2EDuration="2.102611734s" podCreationTimestamp="2025-12-05 14:32:43 +0000 UTC" firstStartedPulling="2025-12-05 14:32:44.058462293 +0000 UTC m=+2172.606060432" lastFinishedPulling="2025-12-05 14:32:44.580762435 +0000 UTC m=+2173.128360564" observedRunningTime="2025-12-05 14:32:45.0940593 +0000 UTC m=+2173.641657439" watchObservedRunningTime="2025-12-05 14:32:45.102611734 +0000 UTC m=+2173.650209873" Dec 05 14:33:14 crc kubenswrapper[4858]: I1205 14:33:14.760529 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:33:14 crc kubenswrapper[4858]: I1205 14:33:14.761026 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:33:14 crc kubenswrapper[4858]: I1205 14:33:14.761075 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:33:14 crc kubenswrapper[4858]: I1205 14:33:14.761924 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb7be8b253883532e4d55d9c3cb45201cb840b781ba34bd639f96f87bb561d52"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:33:14 crc kubenswrapper[4858]: I1205 14:33:14.761986 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://eb7be8b253883532e4d55d9c3cb45201cb840b781ba34bd639f96f87bb561d52" gracePeriod=600 Dec 05 14:33:15 crc kubenswrapper[4858]: I1205 14:33:15.349801 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="eb7be8b253883532e4d55d9c3cb45201cb840b781ba34bd639f96f87bb561d52" exitCode=0 Dec 05 14:33:15 crc kubenswrapper[4858]: I1205 14:33:15.349859 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"eb7be8b253883532e4d55d9c3cb45201cb840b781ba34bd639f96f87bb561d52"} Dec 05 14:33:15 crc kubenswrapper[4858]: I1205 14:33:15.350323 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96"} Dec 05 14:33:15 crc kubenswrapper[4858]: I1205 14:33:15.350338 4858 scope.go:117] "RemoveContainer" containerID="13942123c1c0868fe460d44f646c3dd5c7da78a3f18ff5699d05b14dd20caf65" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.354847 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9bpzj"] Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.359910 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.388636 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bpzj"] Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.423855 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2d7k\" (UniqueName: \"kubernetes.io/projected/5792b90a-3fda-48e3-b83c-fbc77906b978-kube-api-access-z2d7k\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.423943 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-utilities\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.423983 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-catalog-content\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.525778 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2d7k\" (UniqueName: \"kubernetes.io/projected/5792b90a-3fda-48e3-b83c-fbc77906b978-kube-api-access-z2d7k\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.525867 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-utilities\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.525893 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-catalog-content\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.526478 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-catalog-content\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.526528 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-utilities\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.559564 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z2d7k\" (UniqueName: \"kubernetes.io/projected/5792b90a-3fda-48e3-b83c-fbc77906b978-kube-api-access-z2d7k\") pod \"redhat-marketplace-9bpzj\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:31 crc kubenswrapper[4858]: I1205 14:33:31.688257 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:32 crc kubenswrapper[4858]: I1205 14:33:32.236342 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bpzj"] Dec 05 14:33:32 crc kubenswrapper[4858]: I1205 14:33:32.505424 4858 generic.go:334] "Generic (PLEG): container finished" podID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerID="e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972" exitCode=0 Dec 05 14:33:32 crc kubenswrapper[4858]: I1205 14:33:32.505656 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bpzj" event={"ID":"5792b90a-3fda-48e3-b83c-fbc77906b978","Type":"ContainerDied","Data":"e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972"} Dec 05 14:33:32 crc kubenswrapper[4858]: I1205 14:33:32.505717 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bpzj" event={"ID":"5792b90a-3fda-48e3-b83c-fbc77906b978","Type":"ContainerStarted","Data":"60b6d58a459575c84422b22fba425db66017438028c04b793bc5c14eb5efbebb"} Dec 05 14:33:32 crc kubenswrapper[4858]: I1205 14:33:32.507752 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:33:33 crc kubenswrapper[4858]: I1205 14:33:33.519094 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bpzj" event={"ID":"5792b90a-3fda-48e3-b83c-fbc77906b978","Type":"ContainerStarted","Data":"db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e"} Dec 05 14:33:34 crc kubenswrapper[4858]: I1205 14:33:34.546910 4858 generic.go:334] "Generic (PLEG): container finished" podID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerID="db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e" exitCode=0 Dec 05 14:33:34 crc kubenswrapper[4858]: I1205 14:33:34.548453 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bpzj" event={"ID":"5792b90a-3fda-48e3-b83c-fbc77906b978","Type":"ContainerDied","Data":"db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e"} Dec 05 14:33:35 crc kubenswrapper[4858]: I1205 14:33:35.558053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bpzj" event={"ID":"5792b90a-3fda-48e3-b83c-fbc77906b978","Type":"ContainerStarted","Data":"a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431"} Dec 05 14:33:35 crc kubenswrapper[4858]: I1205 14:33:35.599411 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9bpzj" podStartSLOduration=2.157156505 podStartE2EDuration="4.599368577s" podCreationTimestamp="2025-12-05 14:33:31 +0000 UTC" firstStartedPulling="2025-12-05 14:33:32.50751088 +0000 UTC m=+2221.055109019" lastFinishedPulling="2025-12-05 14:33:34.949722952 +0000 UTC m=+2223.497321091" observedRunningTime="2025-12-05 14:33:35.590685092 +0000 UTC m=+2224.138283251" watchObservedRunningTime="2025-12-05 14:33:35.599368577 +0000 UTC 
m=+2224.146966716" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.724515 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-prpgq"] Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.727299 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.747221 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prpgq"] Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.893008 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-catalog-content\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.893197 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-utilities\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.893258 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fz9\" (UniqueName: \"kubernetes.io/projected/5328b5fe-9532-428f-9ddc-f1443c1101af-kube-api-access-g5fz9\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.995325 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-utilities\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.995419 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fz9\" (UniqueName: \"kubernetes.io/projected/5328b5fe-9532-428f-9ddc-f1443c1101af-kube-api-access-g5fz9\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.995492 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-catalog-content\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.995917 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-utilities\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:38 crc kubenswrapper[4858]: I1205 14:33:38.996323 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-catalog-content\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:39 crc kubenswrapper[4858]: I1205 14:33:39.013982 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5fz9\" (UniqueName: \"kubernetes.io/projected/5328b5fe-9532-428f-9ddc-f1443c1101af-kube-api-access-g5fz9\") pod \"community-operators-prpgq\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:39 crc kubenswrapper[4858]: I1205 14:33:39.051142 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:39 crc kubenswrapper[4858]: I1205 14:33:39.623136 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-prpgq"] Dec 05 14:33:39 crc kubenswrapper[4858]: W1205 14:33:39.628208 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5328b5fe_9532_428f_9ddc_f1443c1101af.slice/crio-d8095a5f7acc14d4560954fa4a381d1238588aa81490a99d60d65b66d32cb506 WatchSource:0}: Error finding container d8095a5f7acc14d4560954fa4a381d1238588aa81490a99d60d65b66d32cb506: Status 404 returned error can't find the container with id d8095a5f7acc14d4560954fa4a381d1238588aa81490a99d60d65b66d32cb506 Dec 05 14:33:40 crc kubenswrapper[4858]: I1205 14:33:40.599242 4858 generic.go:334] "Generic (PLEG): container finished" podID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerID="be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4" exitCode=0 Dec 05 14:33:40 crc kubenswrapper[4858]: I1205 14:33:40.599347 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prpgq" event={"ID":"5328b5fe-9532-428f-9ddc-f1443c1101af","Type":"ContainerDied","Data":"be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4"} Dec 05 14:33:40 crc kubenswrapper[4858]: I1205 14:33:40.599585 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prpgq" event={"ID":"5328b5fe-9532-428f-9ddc-f1443c1101af","Type":"ContainerStarted","Data":"d8095a5f7acc14d4560954fa4a381d1238588aa81490a99d60d65b66d32cb506"} Dec 05 14:33:40 crc kubenswrapper[4858]: I1205 14:33:40.602400 4858 generic.go:334] "Generic (PLEG): container finished" podID="4cdf2cd0-7712-42af-8481-4d3789977f39" containerID="47eb9579c6e9fc72bc1e9c92803a8b19ed4a35717508823a18b40e28a5526179" exitCode=0 Dec 05 14:33:40 crc kubenswrapper[4858]: I1205 14:33:40.602424 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" event={"ID":"4cdf2cd0-7712-42af-8481-4d3789977f39","Type":"ContainerDied","Data":"47eb9579c6e9fc72bc1e9c92803a8b19ed4a35717508823a18b40e28a5526179"} Dec 05 14:33:41 crc kubenswrapper[4858]: I1205 14:33:41.614379 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prpgq" event={"ID":"5328b5fe-9532-428f-9ddc-f1443c1101af","Type":"ContainerStarted","Data":"d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c"} Dec 05 14:33:41 crc kubenswrapper[4858]: I1205 14:33:41.688796 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 
14:33:41 crc kubenswrapper[4858]: I1205 14:33:41.688876 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:41 crc kubenswrapper[4858]: I1205 14:33:41.743464 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.299286 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.361640 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-nova-metadata-neutron-config-0\") pod \"4cdf2cd0-7712-42af-8481-4d3789977f39\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.361707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4cdf2cd0-7712-42af-8481-4d3789977f39\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.361767 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-inventory\") pod \"4cdf2cd0-7712-42af-8481-4d3789977f39\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.361871 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zmbf\" (UniqueName: \"kubernetes.io/projected/4cdf2cd0-7712-42af-8481-4d3789977f39-kube-api-access-2zmbf\") pod \"4cdf2cd0-7712-42af-8481-4d3789977f39\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.361934 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-ssh-key\") pod \"4cdf2cd0-7712-42af-8481-4d3789977f39\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.361988 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-metadata-combined-ca-bundle\") pod \"4cdf2cd0-7712-42af-8481-4d3789977f39\" (UID: \"4cdf2cd0-7712-42af-8481-4d3789977f39\") " Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.381861 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4cdf2cd0-7712-42af-8481-4d3789977f39" (UID: "4cdf2cd0-7712-42af-8481-4d3789977f39"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.384155 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cdf2cd0-7712-42af-8481-4d3789977f39-kube-api-access-2zmbf" (OuterVolumeSpecName: "kube-api-access-2zmbf") pod "4cdf2cd0-7712-42af-8481-4d3789977f39" (UID: "4cdf2cd0-7712-42af-8481-4d3789977f39"). InnerVolumeSpecName "kube-api-access-2zmbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.398569 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4cdf2cd0-7712-42af-8481-4d3789977f39" (UID: "4cdf2cd0-7712-42af-8481-4d3789977f39"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.415156 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4cdf2cd0-7712-42af-8481-4d3789977f39" (UID: "4cdf2cd0-7712-42af-8481-4d3789977f39"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.423675 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-inventory" (OuterVolumeSpecName: "inventory") pod "4cdf2cd0-7712-42af-8481-4d3789977f39" (UID: "4cdf2cd0-7712-42af-8481-4d3789977f39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.428697 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4cdf2cd0-7712-42af-8481-4d3789977f39" (UID: "4cdf2cd0-7712-42af-8481-4d3789977f39"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.464355 4858 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.464388 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.464401 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.464409 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zmbf\" (UniqueName: \"kubernetes.io/projected/4cdf2cd0-7712-42af-8481-4d3789977f39-kube-api-access-2zmbf\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.464418 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.464427 4858 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cdf2cd0-7712-42af-8481-4d3789977f39-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.625038 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" event={"ID":"4cdf2cd0-7712-42af-8481-4d3789977f39","Type":"ContainerDied","Data":"a9eab7cda9ca80bba8c4ce12229f50ba590cb42562afdf812fca21e32688ee74"} Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.625082 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9eab7cda9ca80bba8c4ce12229f50ba590cb42562afdf812fca21e32688ee74" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.625150 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m5xmk" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.632952 4858 generic.go:334] "Generic (PLEG): container finished" podID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerID="d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c" exitCode=0 Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.634985 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prpgq" event={"ID":"5328b5fe-9532-428f-9ddc-f1443c1101af","Type":"ContainerDied","Data":"d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c"} Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.765998 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.818283 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv"] Dec 05 14:33:42 crc kubenswrapper[4858]: E1205 14:33:42.818713 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cdf2cd0-7712-42af-8481-4d3789977f39" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.818730 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cdf2cd0-7712-42af-8481-4d3789977f39" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.818945 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cdf2cd0-7712-42af-8481-4d3789977f39" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.819567 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.830182 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.830284 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.835932 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.836092 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.836147 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.839132 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv"] Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.869951 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.870017 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.870126 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cpn\" (UniqueName: \"kubernetes.io/projected/d474385c-0b18-4b0a-90b2-3ce49a444227-kube-api-access-x7cpn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.870155 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.870194 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.971601 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.973145 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7cpn\" (UniqueName: \"kubernetes.io/projected/d474385c-0b18-4b0a-90b2-3ce49a444227-kube-api-access-x7cpn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.973193 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.973245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.973357 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.977407 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.977655 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.978362 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.989379 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:42 crc kubenswrapper[4858]: I1205 14:33:42.994857 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7cpn\" (UniqueName: \"kubernetes.io/projected/d474385c-0b18-4b0a-90b2-3ce49a444227-kube-api-access-x7cpn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:43 crc kubenswrapper[4858]: I1205 14:33:43.141663 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:33:43 crc kubenswrapper[4858]: I1205 14:33:43.643124 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prpgq" event={"ID":"5328b5fe-9532-428f-9ddc-f1443c1101af","Type":"ContainerStarted","Data":"8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8"} Dec 05 14:33:43 crc kubenswrapper[4858]: I1205 14:33:43.675370 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-prpgq" podStartSLOduration=3.215622019 podStartE2EDuration="5.675349185s" podCreationTimestamp="2025-12-05 14:33:38 +0000 UTC" firstStartedPulling="2025-12-05 14:33:40.601056372 +0000 UTC m=+2229.148654511" lastFinishedPulling="2025-12-05 14:33:43.060783538 +0000 UTC m=+2231.608381677" observedRunningTime="2025-12-05 14:33:43.664275445 +0000 UTC m=+2232.211873604" watchObservedRunningTime="2025-12-05 14:33:43.675349185 +0000 UTC m=+2232.222947324" Dec 05 14:33:43 crc kubenswrapper[4858]: I1205 14:33:43.718892 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv"] Dec 05 14:33:43 crc kubenswrapper[4858]: W1205 14:33:43.728008 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd474385c_0b18_4b0a_90b2_3ce49a444227.slice/crio-e478b4ec6c7351370abbb1033a4a05073486136b63e09eb61a87db16bdb675fa WatchSource:0}: Error finding container e478b4ec6c7351370abbb1033a4a05073486136b63e09eb61a87db16bdb675fa: Status 404 returned error can't find the container with id e478b4ec6c7351370abbb1033a4a05073486136b63e09eb61a87db16bdb675fa Dec 05 14:33:44 crc kubenswrapper[4858]: I1205 14:33:44.114764 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bpzj"] Dec 05 14:33:44 crc kubenswrapper[4858]: I1205 14:33:44.656783 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9bpzj" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerName="registry-server" containerID="cri-o://a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431" gracePeriod=2 Dec 05 14:33:44 crc kubenswrapper[4858]: I1205 14:33:44.658322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" event={"ID":"d474385c-0b18-4b0a-90b2-3ce49a444227","Type":"ContainerStarted","Data":"ed5546c08e6d3050d29fb02271aa318195308c978349b1b37e3d84d1d824e71f"} Dec 05 14:33:44 crc kubenswrapper[4858]: I1205 14:33:44.658359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" event={"ID":"d474385c-0b18-4b0a-90b2-3ce49a444227","Type":"ContainerStarted","Data":"e478b4ec6c7351370abbb1033a4a05073486136b63e09eb61a87db16bdb675fa"} Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.211453 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.234364 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" podStartSLOduration=2.716682991 podStartE2EDuration="3.234345117s" podCreationTimestamp="2025-12-05 14:33:42 +0000 UTC" firstStartedPulling="2025-12-05 14:33:43.732011017 +0000 UTC m=+2232.279609156" lastFinishedPulling="2025-12-05 14:33:44.249673143 +0000 UTC m=+2232.797271282" observedRunningTime="2025-12-05 14:33:44.686567316 +0000 UTC m=+2233.234165455" watchObservedRunningTime="2025-12-05 14:33:45.234345117 +0000 UTC m=+2233.781943256" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.318519 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2d7k\" (UniqueName: \"kubernetes.io/projected/5792b90a-3fda-48e3-b83c-fbc77906b978-kube-api-access-z2d7k\") pod \"5792b90a-3fda-48e3-b83c-fbc77906b978\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.318574 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-catalog-content\") pod \"5792b90a-3fda-48e3-b83c-fbc77906b978\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.318595 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-utilities\") pod \"5792b90a-3fda-48e3-b83c-fbc77906b978\" (UID: \"5792b90a-3fda-48e3-b83c-fbc77906b978\") " Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.319445 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-utilities" (OuterVolumeSpecName: "utilities") pod "5792b90a-3fda-48e3-b83c-fbc77906b978" (UID: "5792b90a-3fda-48e3-b83c-fbc77906b978"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.323560 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5792b90a-3fda-48e3-b83c-fbc77906b978-kube-api-access-z2d7k" (OuterVolumeSpecName: "kube-api-access-z2d7k") pod "5792b90a-3fda-48e3-b83c-fbc77906b978" (UID: "5792b90a-3fda-48e3-b83c-fbc77906b978"). InnerVolumeSpecName "kube-api-access-z2d7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.336460 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5792b90a-3fda-48e3-b83c-fbc77906b978" (UID: "5792b90a-3fda-48e3-b83c-fbc77906b978"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.421381 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2d7k\" (UniqueName: \"kubernetes.io/projected/5792b90a-3fda-48e3-b83c-fbc77906b978-kube-api-access-z2d7k\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.421411 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.421420 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5792b90a-3fda-48e3-b83c-fbc77906b978-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.668698 4858 generic.go:334] "Generic (PLEG): container finished" podID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerID="a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431" exitCode=0 Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.668732 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bpzj" event={"ID":"5792b90a-3fda-48e3-b83c-fbc77906b978","Type":"ContainerDied","Data":"a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431"} Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.668764 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bpzj" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.668783 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bpzj" event={"ID":"5792b90a-3fda-48e3-b83c-fbc77906b978","Type":"ContainerDied","Data":"60b6d58a459575c84422b22fba425db66017438028c04b793bc5c14eb5efbebb"} Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.668803 4858 scope.go:117] "RemoveContainer" containerID="a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.694739 4858 scope.go:117] "RemoveContainer" containerID="db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.709176 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bpzj"] Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.715779 4858 scope.go:117] "RemoveContainer" containerID="e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.725368 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bpzj"] Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.785293 4858 scope.go:117] "RemoveContainer" containerID="a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431" Dec 05 14:33:45 crc kubenswrapper[4858]: E1205 14:33:45.786238 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431\": container with ID starting with a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431 not found: ID does not exist" containerID="a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.786312 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431"} err="failed to get container status \"a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431\": rpc error: code = NotFound desc = could not find container \"a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431\": container with ID starting with a43029a5be0a1311ab44348ee34900c87cd23a783577b207fd789ce91269a431 not found: ID does not exist" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.786341 4858 scope.go:117] "RemoveContainer" containerID="db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e" Dec 05 14:33:45 crc kubenswrapper[4858]: E1205 14:33:45.787425 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e\": container with ID starting with db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e not found: ID does not exist" containerID="db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.787469 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e"} err="failed to get container status \"db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e\": rpc error: code = NotFound desc = could not find container \"db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e\": container with ID starting with db6dd6a472621dbef877b6bd891b144d6d1d99498da0906129d176e19e39bf8e not found: ID does not exist" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.787549 4858 scope.go:117] "RemoveContainer" containerID="e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972" Dec 05 14:33:45 crc kubenswrapper[4858]: E1205 14:33:45.788029 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972\": container with ID starting with e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972 not found: ID does not exist" containerID="e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.788054 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972"} err="failed to get container status \"e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972\": rpc error: code = NotFound desc = could not find container \"e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972\": container with ID starting with e57435ad923d90e3172388ea38b5c3b2508b8a9662d539bb3f386c11f7e6c972 not found: ID does not exist" Dec 05 14:33:45 crc kubenswrapper[4858]: I1205 14:33:45.914300 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" path="/var/lib/kubelet/pods/5792b90a-3fda-48e3-b83c-fbc77906b978/volumes" Dec 05 14:33:49 crc kubenswrapper[4858]: I1205 14:33:49.052836 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:49 crc kubenswrapper[4858]: I1205 14:33:49.053141 4858 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:49 crc kubenswrapper[4858]: I1205 14:33:49.098746 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:49 crc kubenswrapper[4858]: I1205 14:33:49.748027 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:50 crc kubenswrapper[4858]: I1205 14:33:50.113704 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prpgq"] Dec 05 14:33:51 crc kubenswrapper[4858]: I1205 14:33:51.721450 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-prpgq" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerName="registry-server" containerID="cri-o://8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8" gracePeriod=2 Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.187846 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.351801 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-catalog-content\") pod \"5328b5fe-9532-428f-9ddc-f1443c1101af\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.351905 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-utilities\") pod \"5328b5fe-9532-428f-9ddc-f1443c1101af\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.351968 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5fz9\" (UniqueName: \"kubernetes.io/projected/5328b5fe-9532-428f-9ddc-f1443c1101af-kube-api-access-g5fz9\") pod \"5328b5fe-9532-428f-9ddc-f1443c1101af\" (UID: \"5328b5fe-9532-428f-9ddc-f1443c1101af\") " Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.353130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-utilities" (OuterVolumeSpecName: "utilities") pod "5328b5fe-9532-428f-9ddc-f1443c1101af" (UID: "5328b5fe-9532-428f-9ddc-f1443c1101af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.366056 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5328b5fe-9532-428f-9ddc-f1443c1101af-kube-api-access-g5fz9" (OuterVolumeSpecName: "kube-api-access-g5fz9") pod "5328b5fe-9532-428f-9ddc-f1443c1101af" (UID: "5328b5fe-9532-428f-9ddc-f1443c1101af"). InnerVolumeSpecName "kube-api-access-g5fz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.399321 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5328b5fe-9532-428f-9ddc-f1443c1101af" (UID: "5328b5fe-9532-428f-9ddc-f1443c1101af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.454864 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5fz9\" (UniqueName: \"kubernetes.io/projected/5328b5fe-9532-428f-9ddc-f1443c1101af-kube-api-access-g5fz9\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.454894 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.454905 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5328b5fe-9532-428f-9ddc-f1443c1101af-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.731577 4858 generic.go:334] "Generic (PLEG): container finished" podID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerID="8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8" exitCode=0 Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.731621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prpgq" event={"ID":"5328b5fe-9532-428f-9ddc-f1443c1101af","Type":"ContainerDied","Data":"8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8"} Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.731648 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-prpgq" event={"ID":"5328b5fe-9532-428f-9ddc-f1443c1101af","Type":"ContainerDied","Data":"d8095a5f7acc14d4560954fa4a381d1238588aa81490a99d60d65b66d32cb506"} Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.731650 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-prpgq" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.731666 4858 scope.go:117] "RemoveContainer" containerID="8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.751061 4858 scope.go:117] "RemoveContainer" containerID="d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.772474 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-prpgq"] Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.778083 4858 scope.go:117] "RemoveContainer" containerID="be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.781401 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-prpgq"] Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.815871 4858 scope.go:117] "RemoveContainer" containerID="8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8" Dec 05 14:33:52 crc kubenswrapper[4858]: E1205 14:33:52.816425 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8\": container with ID starting with 8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8 not found: ID does not exist" containerID="8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.816468 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8"} err="failed to get container status \"8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8\": rpc error: code = NotFound desc = could not find container \"8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8\": container with ID starting with 8432330f29652d45353c2e884a18d5b8219b26a45a3e99bd8e44deff9f327ef8 not found: ID does not exist" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.816499 4858 scope.go:117] "RemoveContainer" containerID="d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c" Dec 05 14:33:52 crc kubenswrapper[4858]: E1205 14:33:52.816804 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c\": container with ID starting with d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c not found: ID does not exist" containerID="d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.816904 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c"} err="failed to get container status \"d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c\": rpc error: code = NotFound desc = could not find container \"d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c\": container with ID starting with d85757d90ba0db865458432ac7ccbe74029a316092b96b3c11934945998aba3c not found: ID does not exist" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.816980 4858 scope.go:117] "RemoveContainer" 
containerID="be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4" Dec 05 14:33:52 crc kubenswrapper[4858]: E1205 14:33:52.817266 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4\": container with ID starting with be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4 not found: ID does not exist" containerID="be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4" Dec 05 14:33:52 crc kubenswrapper[4858]: I1205 14:33:52.817306 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4"} err="failed to get container status \"be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4\": rpc error: code = NotFound desc = could not find container \"be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4\": container with ID starting with be0b4f19d296c7ac10e663fcab1c2a4bfb7f550e7cd9acfde927a927371284d4 not found: ID does not exist" Dec 05 14:33:53 crc kubenswrapper[4858]: I1205 14:33:53.912425 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" path="/var/lib/kubelet/pods/5328b5fe-9532-428f-9ddc-f1443c1101af/volumes" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.026253 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bqs7d"] Dec 05 14:35:18 crc kubenswrapper[4858]: E1205 14:35:18.027044 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerName="extract-utilities" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.027055 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerName="extract-utilities" Dec 05 14:35:18 crc kubenswrapper[4858]: E1205 14:35:18.027070 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerName="extract-utilities" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.027077 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerName="extract-utilities" Dec 05 14:35:18 crc kubenswrapper[4858]: E1205 14:35:18.027096 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerName="extract-content" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.027104 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerName="extract-content" Dec 05 14:35:18 crc kubenswrapper[4858]: E1205 14:35:18.027119 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerName="registry-server" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.027125 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerName="registry-server" Dec 05 14:35:18 crc kubenswrapper[4858]: E1205 14:35:18.027136 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerName="extract-content" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.027142 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerName="extract-content" Dec 05 
14:35:18 crc kubenswrapper[4858]: E1205 14:35:18.027151 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerName="registry-server" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.027156 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerName="registry-server" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.027333 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5328b5fe-9532-428f-9ddc-f1443c1101af" containerName="registry-server" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.027345 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="5792b90a-3fda-48e3-b83c-fbc77906b978" containerName="registry-server" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.028635 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.048133 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqs7d"] Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.193791 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-catalog-content\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.194347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-utilities\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.194445 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l8mg\" (UniqueName: \"kubernetes.io/projected/9ebbd979-502f-4378-97e1-481f21fd30c7-kube-api-access-4l8mg\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.297175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-catalog-content\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.297863 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-catalog-content\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.298383 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-utilities\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 
crc kubenswrapper[4858]: I1205 14:35:18.298500 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l8mg\" (UniqueName: \"kubernetes.io/projected/9ebbd979-502f-4378-97e1-481f21fd30c7-kube-api-access-4l8mg\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.298747 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-utilities\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.324925 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l8mg\" (UniqueName: \"kubernetes.io/projected/9ebbd979-502f-4378-97e1-481f21fd30c7-kube-api-access-4l8mg\") pod \"redhat-operators-bqs7d\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.366636 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:18 crc kubenswrapper[4858]: I1205 14:35:18.856869 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bqs7d"] Dec 05 14:35:19 crc kubenswrapper[4858]: I1205 14:35:19.450130 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerID="f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d" exitCode=0 Dec 05 14:35:19 crc kubenswrapper[4858]: I1205 14:35:19.450199 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqs7d" event={"ID":"9ebbd979-502f-4378-97e1-481f21fd30c7","Type":"ContainerDied","Data":"f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d"} Dec 05 14:35:19 crc kubenswrapper[4858]: I1205 14:35:19.450435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqs7d" event={"ID":"9ebbd979-502f-4378-97e1-481f21fd30c7","Type":"ContainerStarted","Data":"dbf56b23aaef6be192c9b23e370f7b673acd8b8260ea7a3aa9befacc16e3e89f"} Dec 05 14:35:20 crc kubenswrapper[4858]: I1205 14:35:20.461011 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqs7d" event={"ID":"9ebbd979-502f-4378-97e1-481f21fd30c7","Type":"ContainerStarted","Data":"0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd"} Dec 05 14:35:23 crc kubenswrapper[4858]: I1205 14:35:23.490794 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerID="0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd" exitCode=0 Dec 05 14:35:23 crc kubenswrapper[4858]: I1205 14:35:23.490924 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqs7d" event={"ID":"9ebbd979-502f-4378-97e1-481f21fd30c7","Type":"ContainerDied","Data":"0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd"} Dec 05 14:35:24 crc kubenswrapper[4858]: I1205 14:35:24.503795 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqs7d" 
event={"ID":"9ebbd979-502f-4378-97e1-481f21fd30c7","Type":"ContainerStarted","Data":"60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b"} Dec 05 14:35:24 crc kubenswrapper[4858]: I1205 14:35:24.524139 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bqs7d" podStartSLOduration=2.102142127 podStartE2EDuration="6.524118618s" podCreationTimestamp="2025-12-05 14:35:18 +0000 UTC" firstStartedPulling="2025-12-05 14:35:19.451889065 +0000 UTC m=+2327.999487204" lastFinishedPulling="2025-12-05 14:35:23.873865556 +0000 UTC m=+2332.421463695" observedRunningTime="2025-12-05 14:35:24.520922341 +0000 UTC m=+2333.068520480" watchObservedRunningTime="2025-12-05 14:35:24.524118618 +0000 UTC m=+2333.071716757" Dec 05 14:35:28 crc kubenswrapper[4858]: I1205 14:35:28.367459 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:28 crc kubenswrapper[4858]: I1205 14:35:28.367781 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:29 crc kubenswrapper[4858]: I1205 14:35:29.423278 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bqs7d" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="registry-server" probeResult="failure" output=< Dec 05 14:35:29 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:35:29 crc kubenswrapper[4858]: > Dec 05 14:35:38 crc kubenswrapper[4858]: I1205 14:35:38.414260 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:38 crc kubenswrapper[4858]: I1205 14:35:38.471957 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:38 crc kubenswrapper[4858]: I1205 14:35:38.650708 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqs7d"] Dec 05 14:35:39 crc kubenswrapper[4858]: I1205 14:35:39.677463 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bqs7d" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="registry-server" containerID="cri-o://60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b" gracePeriod=2 Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.117103 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.145949 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-catalog-content\") pod \"9ebbd979-502f-4378-97e1-481f21fd30c7\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.146041 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-utilities\") pod \"9ebbd979-502f-4378-97e1-481f21fd30c7\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.146088 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l8mg\" (UniqueName: \"kubernetes.io/projected/9ebbd979-502f-4378-97e1-481f21fd30c7-kube-api-access-4l8mg\") pod \"9ebbd979-502f-4378-97e1-481f21fd30c7\" (UID: \"9ebbd979-502f-4378-97e1-481f21fd30c7\") " Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.146947 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-utilities" (OuterVolumeSpecName: "utilities") pod "9ebbd979-502f-4378-97e1-481f21fd30c7" (UID: "9ebbd979-502f-4378-97e1-481f21fd30c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.184052 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ebbd979-502f-4378-97e1-481f21fd30c7-kube-api-access-4l8mg" (OuterVolumeSpecName: "kube-api-access-4l8mg") pod "9ebbd979-502f-4378-97e1-481f21fd30c7" (UID: "9ebbd979-502f-4378-97e1-481f21fd30c7"). InnerVolumeSpecName "kube-api-access-4l8mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.248892 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l8mg\" (UniqueName: \"kubernetes.io/projected/9ebbd979-502f-4378-97e1-481f21fd30c7-kube-api-access-4l8mg\") on node \"crc\" DevicePath \"\"" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.248934 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.259615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ebbd979-502f-4378-97e1-481f21fd30c7" (UID: "9ebbd979-502f-4378-97e1-481f21fd30c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.351133 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ebbd979-502f-4378-97e1-481f21fd30c7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.687558 4858 generic.go:334] "Generic (PLEG): container finished" podID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerID="60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b" exitCode=0 Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.687636 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bqs7d" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.687649 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqs7d" event={"ID":"9ebbd979-502f-4378-97e1-481f21fd30c7","Type":"ContainerDied","Data":"60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b"} Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.689284 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bqs7d" event={"ID":"9ebbd979-502f-4378-97e1-481f21fd30c7","Type":"ContainerDied","Data":"dbf56b23aaef6be192c9b23e370f7b673acd8b8260ea7a3aa9befacc16e3e89f"} Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.689343 4858 scope.go:117] "RemoveContainer" containerID="60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.716140 4858 scope.go:117] "RemoveContainer" containerID="0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.738463 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bqs7d"] Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.747106 4858 scope.go:117] "RemoveContainer" containerID="f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.747933 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bqs7d"] Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.818443 4858 scope.go:117] "RemoveContainer" containerID="60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b" Dec 05 14:35:40 crc kubenswrapper[4858]: E1205 14:35:40.819058 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b\": container with ID starting with 60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b not found: ID does not exist" containerID="60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.819106 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b"} err="failed to get container status \"60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b\": rpc error: code = NotFound desc = could not find container \"60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b\": container with ID starting with 60f39410a25b07ea4554d1364c54b5ddc88970e7efb6032cb75449b2ce70f36b not found: ID does not exist" Dec 05 14:35:40 crc 
kubenswrapper[4858]: I1205 14:35:40.819134 4858 scope.go:117] "RemoveContainer" containerID="0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd" Dec 05 14:35:40 crc kubenswrapper[4858]: E1205 14:35:40.819474 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd\": container with ID starting with 0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd not found: ID does not exist" containerID="0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.819510 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd"} err="failed to get container status \"0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd\": rpc error: code = NotFound desc = could not find container \"0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd\": container with ID starting with 0415f38fbb5c42e7b04ab6d6912edd94cf83e68d227658e6404e18853d05e7cd not found: ID does not exist" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.819525 4858 scope.go:117] "RemoveContainer" containerID="f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d" Dec 05 14:35:40 crc kubenswrapper[4858]: E1205 14:35:40.819740 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d\": container with ID starting with f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d not found: ID does not exist" containerID="f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d" Dec 05 14:35:40 crc kubenswrapper[4858]: I1205 14:35:40.819756 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d"} err="failed to get container status \"f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d\": rpc error: code = NotFound desc = could not find container \"f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d\": container with ID starting with f6150cf348d275c799c43ccbf19052718b501314b74a6d3c4060e9d6d5769f9d not found: ID does not exist" Dec 05 14:35:41 crc kubenswrapper[4858]: I1205 14:35:41.910965 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" path="/var/lib/kubelet/pods/9ebbd979-502f-4378-97e1-481f21fd30c7/volumes" Dec 05 14:35:44 crc kubenswrapper[4858]: I1205 14:35:44.759980 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:35:44 crc kubenswrapper[4858]: I1205 14:35:44.760325 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:36:14 crc kubenswrapper[4858]: I1205 14:36:14.759875 4858 patch_prober.go:28] interesting 
pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:36:14 crc kubenswrapper[4858]: I1205 14:36:14.760599 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:36:44 crc kubenswrapper[4858]: I1205 14:36:44.760182 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:36:44 crc kubenswrapper[4858]: I1205 14:36:44.760658 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:36:44 crc kubenswrapper[4858]: I1205 14:36:44.760710 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:36:44 crc kubenswrapper[4858]: I1205 14:36:44.761417 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:36:44 crc kubenswrapper[4858]: I1205 14:36:44.761464 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" gracePeriod=600 Dec 05 14:36:44 crc kubenswrapper[4858]: E1205 14:36:44.888534 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:36:45 crc kubenswrapper[4858]: I1205 14:36:45.450725 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" exitCode=0 Dec 05 14:36:45 crc kubenswrapper[4858]: I1205 14:36:45.450762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96"} Dec 05 14:36:45 crc kubenswrapper[4858]: 
I1205 14:36:45.450794 4858 scope.go:117] "RemoveContainer" containerID="eb7be8b253883532e4d55d9c3cb45201cb840b781ba34bd639f96f87bb561d52" Dec 05 14:36:45 crc kubenswrapper[4858]: I1205 14:36:45.451444 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:36:45 crc kubenswrapper[4858]: E1205 14:36:45.451764 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:36:59 crc kubenswrapper[4858]: I1205 14:36:59.899877 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:36:59 crc kubenswrapper[4858]: E1205 14:36:59.900710 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:37:11 crc kubenswrapper[4858]: I1205 14:37:11.906605 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:37:11 crc kubenswrapper[4858]: E1205 14:37:11.907322 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:37:24 crc kubenswrapper[4858]: I1205 14:37:24.898955 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:37:24 crc kubenswrapper[4858]: E1205 14:37:24.899612 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:37:37 crc kubenswrapper[4858]: I1205 14:37:37.899632 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:37:37 crc kubenswrapper[4858]: E1205 14:37:37.900431 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:37:49 crc kubenswrapper[4858]: I1205 
14:37:49.900150 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:37:49 crc kubenswrapper[4858]: E1205 14:37:49.900899 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:38:02 crc kubenswrapper[4858]: I1205 14:38:02.899913 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:38:02 crc kubenswrapper[4858]: E1205 14:38:02.900764 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.695092 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7lzzc"] Dec 05 14:38:14 crc kubenswrapper[4858]: E1205 14:38:14.696103 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="extract-content" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.696120 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="extract-content" Dec 05 14:38:14 crc kubenswrapper[4858]: E1205 14:38:14.696170 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="registry-server" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.696178 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="registry-server" Dec 05 14:38:14 crc kubenswrapper[4858]: E1205 14:38:14.696195 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="extract-utilities" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.696201 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="extract-utilities" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.696403 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ebbd979-502f-4378-97e1-481f21fd30c7" containerName="registry-server" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.698441 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.710747 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lzzc"] Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.786901 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-utilities\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.787027 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-catalog-content\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.787087 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pv62\" (UniqueName: \"kubernetes.io/projected/c375524f-8972-48b3-a5c5-ce73e89fe43c-kube-api-access-4pv62\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.888112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pv62\" (UniqueName: \"kubernetes.io/projected/c375524f-8972-48b3-a5c5-ce73e89fe43c-kube-api-access-4pv62\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.888231 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-utilities\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.888338 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-catalog-content\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.888976 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-catalog-content\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.889568 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-utilities\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:14 crc kubenswrapper[4858]: I1205 14:38:14.918935 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4pv62\" (UniqueName: \"kubernetes.io/projected/c375524f-8972-48b3-a5c5-ce73e89fe43c-kube-api-access-4pv62\") pod \"certified-operators-7lzzc\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:15 crc kubenswrapper[4858]: I1205 14:38:15.029042 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:15 crc kubenswrapper[4858]: I1205 14:38:15.551614 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7lzzc"] Dec 05 14:38:16 crc kubenswrapper[4858]: I1205 14:38:16.275112 4858 generic.go:334] "Generic (PLEG): container finished" podID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerID="7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb" exitCode=0 Dec 05 14:38:16 crc kubenswrapper[4858]: I1205 14:38:16.275280 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lzzc" event={"ID":"c375524f-8972-48b3-a5c5-ce73e89fe43c","Type":"ContainerDied","Data":"7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb"} Dec 05 14:38:16 crc kubenswrapper[4858]: I1205 14:38:16.275489 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lzzc" event={"ID":"c375524f-8972-48b3-a5c5-ce73e89fe43c","Type":"ContainerStarted","Data":"a616ef61d857499114258f9101553f53e6e9393542c75a44576459e6c37f713a"} Dec 05 14:38:17 crc kubenswrapper[4858]: I1205 14:38:17.286402 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lzzc" event={"ID":"c375524f-8972-48b3-a5c5-ce73e89fe43c","Type":"ContainerStarted","Data":"6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284"} Dec 05 14:38:17 crc kubenswrapper[4858]: I1205 14:38:17.900191 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:38:17 crc kubenswrapper[4858]: E1205 14:38:17.900434 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:38:18 crc kubenswrapper[4858]: I1205 14:38:18.296413 4858 generic.go:334] "Generic (PLEG): container finished" podID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerID="6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284" exitCode=0 Dec 05 14:38:18 crc kubenswrapper[4858]: I1205 14:38:18.296753 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lzzc" event={"ID":"c375524f-8972-48b3-a5c5-ce73e89fe43c","Type":"ContainerDied","Data":"6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284"} Dec 05 14:38:19 crc kubenswrapper[4858]: I1205 14:38:19.308145 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lzzc" event={"ID":"c375524f-8972-48b3-a5c5-ce73e89fe43c","Type":"ContainerStarted","Data":"0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958"} Dec 05 14:38:19 crc kubenswrapper[4858]: I1205 14:38:19.326580 4858 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7lzzc" podStartSLOduration=2.5487811799999998 podStartE2EDuration="5.326561743s" podCreationTimestamp="2025-12-05 14:38:14 +0000 UTC" firstStartedPulling="2025-12-05 14:38:16.276417223 +0000 UTC m=+2504.824015362" lastFinishedPulling="2025-12-05 14:38:19.054197786 +0000 UTC m=+2507.601795925" observedRunningTime="2025-12-05 14:38:19.325036013 +0000 UTC m=+2507.872634162" watchObservedRunningTime="2025-12-05 14:38:19.326561743 +0000 UTC m=+2507.874159882" Dec 05 14:38:25 crc kubenswrapper[4858]: I1205 14:38:25.029365 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:25 crc kubenswrapper[4858]: I1205 14:38:25.029899 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:25 crc kubenswrapper[4858]: I1205 14:38:25.078302 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:25 crc kubenswrapper[4858]: I1205 14:38:25.403458 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:25 crc kubenswrapper[4858]: I1205 14:38:25.477799 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lzzc"] Dec 05 14:38:27 crc kubenswrapper[4858]: I1205 14:38:27.371269 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7lzzc" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerName="registry-server" containerID="cri-o://0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958" gracePeriod=2 Dec 05 14:38:27 crc kubenswrapper[4858]: I1205 14:38:27.796247 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:27 crc kubenswrapper[4858]: I1205 14:38:27.955760 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pv62\" (UniqueName: \"kubernetes.io/projected/c375524f-8972-48b3-a5c5-ce73e89fe43c-kube-api-access-4pv62\") pod \"c375524f-8972-48b3-a5c5-ce73e89fe43c\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " Dec 05 14:38:27 crc kubenswrapper[4858]: I1205 14:38:27.956128 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-catalog-content\") pod \"c375524f-8972-48b3-a5c5-ce73e89fe43c\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " Dec 05 14:38:27 crc kubenswrapper[4858]: I1205 14:38:27.956299 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-utilities\") pod \"c375524f-8972-48b3-a5c5-ce73e89fe43c\" (UID: \"c375524f-8972-48b3-a5c5-ce73e89fe43c\") " Dec 05 14:38:27 crc kubenswrapper[4858]: I1205 14:38:27.957030 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-utilities" (OuterVolumeSpecName: "utilities") pod "c375524f-8972-48b3-a5c5-ce73e89fe43c" (UID: "c375524f-8972-48b3-a5c5-ce73e89fe43c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:38:27 crc kubenswrapper[4858]: I1205 14:38:27.957641 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:38:27 crc kubenswrapper[4858]: I1205 14:38:27.968148 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c375524f-8972-48b3-a5c5-ce73e89fe43c-kube-api-access-4pv62" (OuterVolumeSpecName: "kube-api-access-4pv62") pod "c375524f-8972-48b3-a5c5-ce73e89fe43c" (UID: "c375524f-8972-48b3-a5c5-ce73e89fe43c"). InnerVolumeSpecName "kube-api-access-4pv62". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.005062 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c375524f-8972-48b3-a5c5-ce73e89fe43c" (UID: "c375524f-8972-48b3-a5c5-ce73e89fe43c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.060134 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pv62\" (UniqueName: \"kubernetes.io/projected/c375524f-8972-48b3-a5c5-ce73e89fe43c-kube-api-access-4pv62\") on node \"crc\" DevicePath \"\"" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.060173 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c375524f-8972-48b3-a5c5-ce73e89fe43c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.381964 4858 generic.go:334] "Generic (PLEG): container finished" podID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerID="0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958" exitCode=0 Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.382695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lzzc" event={"ID":"c375524f-8972-48b3-a5c5-ce73e89fe43c","Type":"ContainerDied","Data":"0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958"} Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.382787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7lzzc" event={"ID":"c375524f-8972-48b3-a5c5-ce73e89fe43c","Type":"ContainerDied","Data":"a616ef61d857499114258f9101553f53e6e9393542c75a44576459e6c37f713a"} Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.382908 4858 scope.go:117] "RemoveContainer" containerID="0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.383108 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7lzzc" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.412888 4858 scope.go:117] "RemoveContainer" containerID="6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.444994 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7lzzc"] Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.462326 4858 scope.go:117] "RemoveContainer" containerID="7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.465045 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7lzzc"] Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.484224 4858 scope.go:117] "RemoveContainer" containerID="0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958" Dec 05 14:38:28 crc kubenswrapper[4858]: E1205 14:38:28.484643 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958\": container with ID starting with 0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958 not found: ID does not exist" containerID="0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.484692 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958"} err="failed to get container status \"0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958\": rpc error: code = NotFound desc = could not find container \"0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958\": container with ID starting with 0693a19cd174762fb075c456e2cc135e5d80fdde6a63bc57c71d219a253c0958 not found: ID does not exist" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.485278 4858 scope.go:117] "RemoveContainer" containerID="6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284" Dec 05 14:38:28 crc kubenswrapper[4858]: E1205 14:38:28.485742 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284\": container with ID starting with 6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284 not found: ID does not exist" containerID="6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.485762 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284"} err="failed to get container status \"6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284\": rpc error: code = NotFound desc = could not find container \"6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284\": container with ID starting with 6df99167f180e817892301b20fd6c3e88d4871744f5f61c36cd535ae4408f284 not found: ID does not exist" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.485776 4858 scope.go:117] "RemoveContainer" containerID="7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb" Dec 05 14:38:28 crc kubenswrapper[4858]: E1205 14:38:28.486226 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb\": container with ID starting with 7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb not found: ID does not exist" containerID="7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb" Dec 05 14:38:28 crc kubenswrapper[4858]: I1205 14:38:28.486423 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb"} err="failed to get container status \"7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb\": rpc error: code = NotFound desc = could not find container \"7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb\": container with ID starting with 7c45e1625063f0c4da800c10345d2590e539a65531026cf4654e1d5045cdcfbb not found: ID does not exist" Dec 05 14:38:29 crc kubenswrapper[4858]: I1205 14:38:29.908609 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" path="/var/lib/kubelet/pods/c375524f-8972-48b3-a5c5-ce73e89fe43c/volumes" Dec 05 14:38:31 crc kubenswrapper[4858]: I1205 14:38:31.909939 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:38:31 crc kubenswrapper[4858]: E1205 14:38:31.910593 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:38:38 crc kubenswrapper[4858]: I1205 14:38:38.476613 4858 generic.go:334] "Generic (PLEG): container finished" podID="d474385c-0b18-4b0a-90b2-3ce49a444227" containerID="ed5546c08e6d3050d29fb02271aa318195308c978349b1b37e3d84d1d824e71f" exitCode=0 Dec 05 14:38:38 crc kubenswrapper[4858]: I1205 14:38:38.476868 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" event={"ID":"d474385c-0b18-4b0a-90b2-3ce49a444227","Type":"ContainerDied","Data":"ed5546c08e6d3050d29fb02271aa318195308c978349b1b37e3d84d1d824e71f"} Dec 05 14:38:39 crc kubenswrapper[4858]: I1205 14:38:39.896559 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.087580 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-inventory\") pod \"d474385c-0b18-4b0a-90b2-3ce49a444227\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.087780 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7cpn\" (UniqueName: \"kubernetes.io/projected/d474385c-0b18-4b0a-90b2-3ce49a444227-kube-api-access-x7cpn\") pod \"d474385c-0b18-4b0a-90b2-3ce49a444227\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.087878 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-ssh-key\") pod \"d474385c-0b18-4b0a-90b2-3ce49a444227\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.087918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-combined-ca-bundle\") pod \"d474385c-0b18-4b0a-90b2-3ce49a444227\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.087961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-secret-0\") pod \"d474385c-0b18-4b0a-90b2-3ce49a444227\" (UID: \"d474385c-0b18-4b0a-90b2-3ce49a444227\") " Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.093589 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d474385c-0b18-4b0a-90b2-3ce49a444227" (UID: "d474385c-0b18-4b0a-90b2-3ce49a444227"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.095981 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d474385c-0b18-4b0a-90b2-3ce49a444227-kube-api-access-x7cpn" (OuterVolumeSpecName: "kube-api-access-x7cpn") pod "d474385c-0b18-4b0a-90b2-3ce49a444227" (UID: "d474385c-0b18-4b0a-90b2-3ce49a444227"). InnerVolumeSpecName "kube-api-access-x7cpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.113880 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-inventory" (OuterVolumeSpecName: "inventory") pod "d474385c-0b18-4b0a-90b2-3ce49a444227" (UID: "d474385c-0b18-4b0a-90b2-3ce49a444227"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.114229 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d474385c-0b18-4b0a-90b2-3ce49a444227" (UID: "d474385c-0b18-4b0a-90b2-3ce49a444227"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.124428 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d474385c-0b18-4b0a-90b2-3ce49a444227" (UID: "d474385c-0b18-4b0a-90b2-3ce49a444227"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.191259 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7cpn\" (UniqueName: \"kubernetes.io/projected/d474385c-0b18-4b0a-90b2-3ce49a444227-kube-api-access-x7cpn\") on node \"crc\" DevicePath \"\"" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.191747 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.191807 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.191818 4858 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.191846 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d474385c-0b18-4b0a-90b2-3ce49a444227-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.495808 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" event={"ID":"d474385c-0b18-4b0a-90b2-3ce49a444227","Type":"ContainerDied","Data":"e478b4ec6c7351370abbb1033a4a05073486136b63e09eb61a87db16bdb675fa"} Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.495858 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e478b4ec6c7351370abbb1033a4a05073486136b63e09eb61a87db16bdb675fa" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.495913 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jdcbv" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.588747 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5"] Dec 05 14:38:40 crc kubenswrapper[4858]: E1205 14:38:40.589313 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerName="extract-utilities" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.589334 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerName="extract-utilities" Dec 05 14:38:40 crc kubenswrapper[4858]: E1205 14:38:40.589360 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d474385c-0b18-4b0a-90b2-3ce49a444227" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.589369 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d474385c-0b18-4b0a-90b2-3ce49a444227" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 14:38:40 crc kubenswrapper[4858]: E1205 14:38:40.589385 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerName="registry-server" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.589392 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerName="registry-server" Dec 05 14:38:40 crc kubenswrapper[4858]: E1205 14:38:40.589430 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerName="extract-content" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.589438 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerName="extract-content" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.589662 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d474385c-0b18-4b0a-90b2-3ce49a444227" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.589683 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c375524f-8972-48b3-a5c5-ce73e89fe43c" containerName="registry-server" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.590600 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.599214 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.599426 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.599763 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.599777 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.599916 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.603413 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.603587 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.607924 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5"] Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703242 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703310 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703403 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703442 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gstk6\" (UniqueName: \"kubernetes.io/projected/468fbbff-77ff-4880-bd6b-7b7b70344d8d-kube-api-access-gstk6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703810 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703907 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.703944 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.805706 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.805782 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gstk6\" (UniqueName: \"kubernetes.io/projected/468fbbff-77ff-4880-bd6b-7b7b70344d8d-kube-api-access-gstk6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.805850 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.805899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.805937 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.805965 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.805990 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.806025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.806049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.808107 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.811090 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.811163 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.811935 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.812926 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.813540 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.814986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.817235 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.823575 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gstk6\" (UniqueName: \"kubernetes.io/projected/468fbbff-77ff-4880-bd6b-7b7b70344d8d-kube-api-access-gstk6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-4hlj5\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:40 crc kubenswrapper[4858]: I1205 14:38:40.919943 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:38:41 crc kubenswrapper[4858]: I1205 14:38:41.529266 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5"] Dec 05 14:38:41 crc kubenswrapper[4858]: I1205 14:38:41.542196 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:38:42 crc kubenswrapper[4858]: I1205 14:38:42.512600 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" event={"ID":"468fbbff-77ff-4880-bd6b-7b7b70344d8d","Type":"ContainerStarted","Data":"e6093ce60b4fcb1e6d7bde7748fda5df4ad007fe76d52b0fee20dee1884821e6"} Dec 05 14:38:42 crc kubenswrapper[4858]: I1205 14:38:42.512890 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" event={"ID":"468fbbff-77ff-4880-bd6b-7b7b70344d8d","Type":"ContainerStarted","Data":"b6b2edbf57b372887c61d6d1b81ff614aff1b9091ff01dcc954d54461c82b580"} Dec 05 14:38:42 crc kubenswrapper[4858]: I1205 14:38:42.534520 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" podStartSLOduration=2.068761673 podStartE2EDuration="2.534501988s" podCreationTimestamp="2025-12-05 14:38:40 +0000 UTC" firstStartedPulling="2025-12-05 14:38:41.541960921 +0000 UTC m=+2530.089559060" lastFinishedPulling="2025-12-05 14:38:42.007701226 +0000 UTC m=+2530.555299375" observedRunningTime="2025-12-05 14:38:42.532744132 +0000 UTC m=+2531.080342291" watchObservedRunningTime="2025-12-05 14:38:42.534501988 +0000 UTC m=+2531.082100127" Dec 05 14:38:43 crc kubenswrapper[4858]: I1205 14:38:43.899941 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:38:43 crc kubenswrapper[4858]: E1205 14:38:43.900466 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:38:56 crc kubenswrapper[4858]: I1205 14:38:56.899592 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:38:56 crc kubenswrapper[4858]: E1205 14:38:56.900505 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:39:10 crc kubenswrapper[4858]: I1205 14:39:10.906976 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:39:10 crc kubenswrapper[4858]: E1205 14:39:10.907768 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:39:25 crc kubenswrapper[4858]: I1205 14:39:25.901719 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:39:25 crc kubenswrapper[4858]: E1205 14:39:25.902482 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:39:36 crc kubenswrapper[4858]: I1205 14:39:36.899954 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:39:36 crc kubenswrapper[4858]: E1205 14:39:36.900617 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:39:48 crc kubenswrapper[4858]: I1205 14:39:48.899476 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:39:48 crc kubenswrapper[4858]: E1205 14:39:48.900177 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:40:02 crc kubenswrapper[4858]: I1205 14:40:02.899049 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:40:02 crc kubenswrapper[4858]: E1205 14:40:02.899822 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:40:15 crc kubenswrapper[4858]: I1205 14:40:15.900215 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:40:15 crc kubenswrapper[4858]: E1205 14:40:15.900958 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:40:26 crc kubenswrapper[4858]: I1205 14:40:26.899359 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:40:26 crc kubenswrapper[4858]: E1205 14:40:26.900045 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:40:39 crc kubenswrapper[4858]: I1205 14:40:39.899043 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:40:39 crc kubenswrapper[4858]: E1205 14:40:39.899713 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:40:50 crc kubenswrapper[4858]: I1205 14:40:50.899396 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:40:50 crc kubenswrapper[4858]: E1205 14:40:50.900146 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:41:02 crc kubenswrapper[4858]: I1205 14:41:02.898822 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:41:02 crc kubenswrapper[4858]: E1205 14:41:02.899430 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:41:17 crc kubenswrapper[4858]: I1205 14:41:17.900400 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:41:17 crc kubenswrapper[4858]: E1205 14:41:17.901171 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:41:29 crc kubenswrapper[4858]: I1205 14:41:29.899451 4858 scope.go:117] "RemoveContainer" 
containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:41:29 crc kubenswrapper[4858]: E1205 14:41:29.900221 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:41:41 crc kubenswrapper[4858]: I1205 14:41:41.908893 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:41:41 crc kubenswrapper[4858]: E1205 14:41:41.910020 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:41:55 crc kubenswrapper[4858]: I1205 14:41:55.341551 4858 generic.go:334] "Generic (PLEG): container finished" podID="468fbbff-77ff-4880-bd6b-7b7b70344d8d" containerID="e6093ce60b4fcb1e6d7bde7748fda5df4ad007fe76d52b0fee20dee1884821e6" exitCode=0 Dec 05 14:41:55 crc kubenswrapper[4858]: I1205 14:41:55.341651 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" event={"ID":"468fbbff-77ff-4880-bd6b-7b7b70344d8d","Type":"ContainerDied","Data":"e6093ce60b4fcb1e6d7bde7748fda5df4ad007fe76d52b0fee20dee1884821e6"} Dec 05 14:41:55 crc kubenswrapper[4858]: I1205 14:41:55.899334 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.354049 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"e6cd5a25857bdb027781c4ff36790c9019ff1005158df128e6511ad21138bb31"} Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.823160 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.900585 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gstk6\" (UniqueName: \"kubernetes.io/projected/468fbbff-77ff-4880-bd6b-7b7b70344d8d-kube-api-access-gstk6\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.900627 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-0\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.900655 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-1\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.900685 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-inventory\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.900734 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-1\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.901931 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-ssh-key\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.901972 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-extra-config-0\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.901995 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-combined-ca-bundle\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.902024 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-0\") pod \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\" (UID: \"468fbbff-77ff-4880-bd6b-7b7b70344d8d\") " Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.906965 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/468fbbff-77ff-4880-bd6b-7b7b70344d8d-kube-api-access-gstk6" (OuterVolumeSpecName: "kube-api-access-gstk6") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "kube-api-access-gstk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.921767 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:41:56 crc kubenswrapper[4858]: I1205 14:41:56.948075 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.009033 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.012159 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.012181 4858 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.012192 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gstk6\" (UniqueName: \"kubernetes.io/projected/468fbbff-77ff-4880-bd6b-7b7b70344d8d-kube-api-access-gstk6\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.012201 4858 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.016297 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.016388 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.027588 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.033019 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.068989 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-inventory" (OuterVolumeSpecName: "inventory") pod "468fbbff-77ff-4880-bd6b-7b7b70344d8d" (UID: "468fbbff-77ff-4880-bd6b-7b7b70344d8d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.116012 4858 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.116052 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.116062 4858 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.116073 4858 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.116081 4858 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/468fbbff-77ff-4880-bd6b-7b7b70344d8d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.363670 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" event={"ID":"468fbbff-77ff-4880-bd6b-7b7b70344d8d","Type":"ContainerDied","Data":"b6b2edbf57b372887c61d6d1b81ff614aff1b9091ff01dcc954d54461c82b580"} Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.363758 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b2edbf57b372887c61d6d1b81ff614aff1b9091ff01dcc954d54461c82b580" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.363934 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-4hlj5" Dec 05 14:41:57 crc kubenswrapper[4858]: E1205 14:41:57.424541 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod468fbbff_77ff_4880_bd6b_7b7b70344d8d.slice\": RecentStats: unable to find data in memory cache]" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.494522 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj"] Dec 05 14:41:57 crc kubenswrapper[4858]: E1205 14:41:57.494986 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="468fbbff-77ff-4880-bd6b-7b7b70344d8d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.495003 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="468fbbff-77ff-4880-bd6b-7b7b70344d8d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.495178 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="468fbbff-77ff-4880-bd6b-7b7b70344d8d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.495865 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.499169 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q8b8c" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.499283 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.499415 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.499666 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.501645 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.502409 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj"] Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.628298 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.628380 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.628407 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.628433 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.628481 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 
14:41:57.628498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6gg\" (UniqueName: \"kubernetes.io/projected/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-kube-api-access-ps6gg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.628516 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.730608 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.730686 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.730761 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.731859 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.731957 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.731998 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6gg\" (UniqueName: \"kubernetes.io/projected/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-kube-api-access-ps6gg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.732050 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.736205 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.736701 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.738345 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.738763 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.744600 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.744621 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.748954 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6gg\" (UniqueName: \"kubernetes.io/projected/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-kube-api-access-ps6gg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-j48gj\" (UID: 
\"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:57 crc kubenswrapper[4858]: I1205 14:41:57.811567 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:41:58 crc kubenswrapper[4858]: I1205 14:41:58.339451 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj"] Dec 05 14:41:58 crc kubenswrapper[4858]: I1205 14:41:58.399438 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" event={"ID":"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638","Type":"ContainerStarted","Data":"949c2b77bd6c630ae807002cf1e48603c9d2420f169e4c9f5b7a7fa24c720b48"} Dec 05 14:41:59 crc kubenswrapper[4858]: I1205 14:41:59.409916 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" event={"ID":"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638","Type":"ContainerStarted","Data":"954b353534e38269ad6fa9c947c6723c61fb787e2e580f3b74bedc337e03642a"} Dec 05 14:41:59 crc kubenswrapper[4858]: I1205 14:41:59.430763 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" podStartSLOduration=1.935649348 podStartE2EDuration="2.430744219s" podCreationTimestamp="2025-12-05 14:41:57 +0000 UTC" firstStartedPulling="2025-12-05 14:41:58.361681657 +0000 UTC m=+2726.909279796" lastFinishedPulling="2025-12-05 14:41:58.856776528 +0000 UTC m=+2727.404374667" observedRunningTime="2025-12-05 14:41:59.425354655 +0000 UTC m=+2727.972952804" watchObservedRunningTime="2025-12-05 14:41:59.430744219 +0000 UTC m=+2727.978342358" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.354280 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ncpdd"] Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.356938 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.368354 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncpdd"] Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.501809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-utilities\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.501940 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92hzl\" (UniqueName: \"kubernetes.io/projected/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-kube-api-access-92hzl\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.502211 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-catalog-content\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.604025 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-utilities\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.604098 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92hzl\" (UniqueName: \"kubernetes.io/projected/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-kube-api-access-92hzl\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.604156 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-catalog-content\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.604536 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-utilities\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.604552 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-catalog-content\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.624516 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-92hzl\" (UniqueName: \"kubernetes.io/projected/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-kube-api-access-92hzl\") pod \"community-operators-ncpdd\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:57 crc kubenswrapper[4858]: I1205 14:43:57.720349 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:43:58 crc kubenswrapper[4858]: I1205 14:43:58.245873 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncpdd"] Dec 05 14:43:58 crc kubenswrapper[4858]: I1205 14:43:58.461268 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncpdd" event={"ID":"c4188f88-ba9a-4237-9fe3-54d920a7a2a3","Type":"ContainerStarted","Data":"acb54ed7717a971fc0f047d27f94f6f593ceb00bb72e4b40f9807eadba62ca7c"} Dec 05 14:44:00 crc kubenswrapper[4858]: I1205 14:44:00.479211 4858 generic.go:334] "Generic (PLEG): container finished" podID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerID="0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c" exitCode=0 Dec 05 14:44:00 crc kubenswrapper[4858]: I1205 14:44:00.479317 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncpdd" event={"ID":"c4188f88-ba9a-4237-9fe3-54d920a7a2a3","Type":"ContainerDied","Data":"0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c"} Dec 05 14:44:00 crc kubenswrapper[4858]: I1205 14:44:00.481042 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:44:01 crc kubenswrapper[4858]: I1205 14:44:01.490816 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncpdd" event={"ID":"c4188f88-ba9a-4237-9fe3-54d920a7a2a3","Type":"ContainerStarted","Data":"05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb"} Dec 05 14:44:05 crc kubenswrapper[4858]: I1205 14:44:05.537613 4858 generic.go:334] "Generic (PLEG): container finished" podID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerID="05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb" exitCode=0 Dec 05 14:44:05 crc kubenswrapper[4858]: I1205 14:44:05.537687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncpdd" event={"ID":"c4188f88-ba9a-4237-9fe3-54d920a7a2a3","Type":"ContainerDied","Data":"05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb"} Dec 05 14:44:06 crc kubenswrapper[4858]: I1205 14:44:06.548146 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncpdd" event={"ID":"c4188f88-ba9a-4237-9fe3-54d920a7a2a3","Type":"ContainerStarted","Data":"45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c"} Dec 05 14:44:06 crc kubenswrapper[4858]: I1205 14:44:06.576519 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ncpdd" podStartSLOduration=3.91517677 podStartE2EDuration="9.576498813s" podCreationTimestamp="2025-12-05 14:43:57 +0000 UTC" firstStartedPulling="2025-12-05 14:44:00.480704093 +0000 UTC m=+2849.028302232" lastFinishedPulling="2025-12-05 14:44:06.142026136 +0000 UTC m=+2854.689624275" observedRunningTime="2025-12-05 14:44:06.570018932 +0000 UTC m=+2855.117617071" watchObservedRunningTime="2025-12-05 
14:44:06.576498813 +0000 UTC m=+2855.124096952" Dec 05 14:44:07 crc kubenswrapper[4858]: I1205 14:44:07.720715 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:44:07 crc kubenswrapper[4858]: I1205 14:44:07.721294 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:44:08 crc kubenswrapper[4858]: I1205 14:44:08.769378 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ncpdd" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="registry-server" probeResult="failure" output=< Dec 05 14:44:08 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:44:08 crc kubenswrapper[4858]: > Dec 05 14:44:14 crc kubenswrapper[4858]: I1205 14:44:14.760620 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:44:14 crc kubenswrapper[4858]: I1205 14:44:14.761123 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:44:17 crc kubenswrapper[4858]: I1205 14:44:17.776194 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:44:17 crc kubenswrapper[4858]: I1205 14:44:17.835357 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:44:18 crc kubenswrapper[4858]: I1205 14:44:18.013474 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncpdd"] Dec 05 14:44:19 crc kubenswrapper[4858]: I1205 14:44:19.698256 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ncpdd" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="registry-server" containerID="cri-o://45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c" gracePeriod=2 Dec 05 14:44:20 crc kubenswrapper[4858]: I1205 14:44:20.701188 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:44:20 crc kubenswrapper[4858]: E1205 14:44:20.857024 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4188f88_ba9a_4237_9fe3_54d920a7a2a3.slice/crio-conmon-45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c.scope\": RecentStats: unable to find data in memory cache]" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.475592 4858 util.go:48] "No ready sandbox for pod can be found. 
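Two different probe mechanisms fail in the entries above: the startup probe for registry-server in community-operators-ncpdd times out connecting to :50051 within 1s (the output format appears to be a gRPC health check against the catalog server's port), and the liveness probe for machine-config-daemon is an HTTP GET to http://127.0.0.1:8798/health that gets connection refused while the container sits in back-off. A sketch of the HTTP case; the 1-second client timeout is an assumption, since the real probe timeout lives in the pod spec:

    // probe.go - a sketch of the liveness check the kubelet performs above:
    // HTTP GET http://127.0.0.1:8798/health with a short timeout.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second} // assumed timeout
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            fmt.Println("probe failure:", err) // e.g. "connect: connection refused", as logged
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe result:", resp.Status) // any 2xx/3xx status counts as success for HTTP probes
    }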
Need to start a new one" pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.562612 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92hzl\" (UniqueName: \"kubernetes.io/projected/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-kube-api-access-92hzl\") pod \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.563251 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-utilities\") pod \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.563337 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-catalog-content\") pod \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\" (UID: \"c4188f88-ba9a-4237-9fe3-54d920a7a2a3\") " Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.567284 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-utilities" (OuterVolumeSpecName: "utilities") pod "c4188f88-ba9a-4237-9fe3-54d920a7a2a3" (UID: "c4188f88-ba9a-4237-9fe3-54d920a7a2a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.597315 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-kube-api-access-92hzl" (OuterVolumeSpecName: "kube-api-access-92hzl") pod "c4188f88-ba9a-4237-9fe3-54d920a7a2a3" (UID: "c4188f88-ba9a-4237-9fe3-54d920a7a2a3"). InnerVolumeSpecName "kube-api-access-92hzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.613327 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4188f88-ba9a-4237-9fe3-54d920a7a2a3" (UID: "c4188f88-ba9a-4237-9fe3-54d920a7a2a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.665925 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92hzl\" (UniqueName: \"kubernetes.io/projected/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-kube-api-access-92hzl\") on node \"crc\" DevicePath \"\"" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.665999 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.666012 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4188f88-ba9a-4237-9fe3-54d920a7a2a3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.715277 4858 generic.go:334] "Generic (PLEG): container finished" podID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerID="45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c" exitCode=0 Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.715318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncpdd" event={"ID":"c4188f88-ba9a-4237-9fe3-54d920a7a2a3","Type":"ContainerDied","Data":"45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c"} Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.715344 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncpdd" event={"ID":"c4188f88-ba9a-4237-9fe3-54d920a7a2a3","Type":"ContainerDied","Data":"acb54ed7717a971fc0f047d27f94f6f593ceb00bb72e4b40f9807eadba62ca7c"} Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.715361 4858 scope.go:117] "RemoveContainer" containerID="45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.715484 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncpdd" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.747659 4858 scope.go:117] "RemoveContainer" containerID="05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.755899 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncpdd"] Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.765698 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ncpdd"] Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.784255 4858 scope.go:117] "RemoveContainer" containerID="0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.834488 4858 scope.go:117] "RemoveContainer" containerID="45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c" Dec 05 14:44:21 crc kubenswrapper[4858]: E1205 14:44:21.839248 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c\": container with ID starting with 45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c not found: ID does not exist" containerID="45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.839280 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c"} err="failed to get container status \"45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c\": rpc error: code = NotFound desc = could not find container \"45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c\": container with ID starting with 45fd4642812949abed93de96dba34aafe81a1790d92da8099263470539ed8a0c not found: ID does not exist" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.839300 4858 scope.go:117] "RemoveContainer" containerID="05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb" Dec 05 14:44:21 crc kubenswrapper[4858]: E1205 14:44:21.839872 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb\": container with ID starting with 05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb not found: ID does not exist" containerID="05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.839900 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb"} err="failed to get container status \"05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb\": rpc error: code = NotFound desc = could not find container \"05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb\": container with ID starting with 05a6e0ace06e20846f9050c1f7c6ddf99d0b374fe377bb41eba1c22b587819cb not found: ID does not exist" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.839923 4858 scope.go:117] "RemoveContainer" containerID="0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c" Dec 05 14:44:21 crc kubenswrapper[4858]: E1205 14:44:21.840157 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c\": container with ID starting with 0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c not found: ID does not exist" containerID="0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.840177 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c"} err="failed to get container status \"0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c\": rpc error: code = NotFound desc = could not find container \"0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c\": container with ID starting with 0de64a3ed1870ecac03d70c985b0a7bc0ba7c45e1f7a156a8cf93573f5605d6c not found: ID does not exist" Dec 05 14:44:21 crc kubenswrapper[4858]: I1205 14:44:21.910219 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" path="/var/lib/kubelet/pods/c4188f88-ba9a-4237-9fe3-54d920a7a2a3/volumes" Dec 05 14:44:44 crc kubenswrapper[4858]: I1205 14:44:44.760569 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:44:44 crc kubenswrapper[4858]: I1205 14:44:44.761180 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.233701 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"] Dec 05 14:45:00 crc kubenswrapper[4858]: E1205 14:45:00.234657 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="extract-content" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.234674 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="extract-content" Dec 05 14:45:00 crc kubenswrapper[4858]: E1205 14:45:00.234695 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="registry-server" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.234703 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="registry-server" Dec 05 14:45:00 crc kubenswrapper[4858]: E1205 14:45:00.234746 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="extract-utilities" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.234755 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="extract-utilities" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.235024 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4188f88-ba9a-4237-9fe3-54d920a7a2a3" containerName="registry-server" Dec 05 14:45:00 crc 
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.237970 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.244073 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"]
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.245727 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.306563 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d97xs\" (UniqueName: \"kubernetes.io/projected/afe05d25-a105-41a6-9443-eee7578072c4-kube-api-access-d97xs\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.306629 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe05d25-a105-41a6-9443-eee7578072c4-config-volume\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.306692 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe05d25-a105-41a6-9443-eee7578072c4-secret-volume\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.408667 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d97xs\" (UniqueName: \"kubernetes.io/projected/afe05d25-a105-41a6-9443-eee7578072c4-kube-api-access-d97xs\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.408728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe05d25-a105-41a6-9443-eee7578072c4-config-volume\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.408808 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe05d25-a105-41a6-9443-eee7578072c4-secret-volume\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"
Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.409772 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe05d25-a105-41a6-9443-eee7578072c4-config-volume\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"
(UniqueName: \"kubernetes.io/configmap/afe05d25-a105-41a6-9443-eee7578072c4-config-volume\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.414170 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe05d25-a105-41a6-9443-eee7578072c4-secret-volume\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.426104 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d97xs\" (UniqueName: \"kubernetes.io/projected/afe05d25-a105-41a6-9443-eee7578072c4-kube-api-access-d97xs\") pod \"collect-profiles-29415765-s54kp\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.553262 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" Dec 05 14:45:00 crc kubenswrapper[4858]: I1205 14:45:00.985720 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"] Dec 05 14:45:01 crc kubenswrapper[4858]: I1205 14:45:01.054738 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" event={"ID":"afe05d25-a105-41a6-9443-eee7578072c4","Type":"ContainerStarted","Data":"f9213f0b1042202dd30aa747a4c16ce5d16dd4479f2433cf4d92783cdd3885c5"} Dec 05 14:45:01 crc kubenswrapper[4858]: I1205 14:45:01.057363 4858 generic.go:334] "Generic (PLEG): container finished" podID="ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" containerID="954b353534e38269ad6fa9c947c6723c61fb787e2e580f3b74bedc337e03642a" exitCode=0 Dec 05 14:45:01 crc kubenswrapper[4858]: I1205 14:45:01.057455 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" event={"ID":"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638","Type":"ContainerDied","Data":"954b353534e38269ad6fa9c947c6723c61fb787e2e580f3b74bedc337e03642a"} Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.067403 4858 generic.go:334] "Generic (PLEG): container finished" podID="afe05d25-a105-41a6-9443-eee7578072c4" containerID="0c2137015b02687e1160d93a6dce359fcf707af437d4cc5bc28b8d0f8df676dc" exitCode=0 Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.067481 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" event={"ID":"afe05d25-a105-41a6-9443-eee7578072c4","Type":"ContainerDied","Data":"0c2137015b02687e1160d93a6dce359fcf707af437d4cc5bc28b8d0f8df676dc"} Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.847783 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.962691 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-1\") pod \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.962754 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6gg\" (UniqueName: \"kubernetes.io/projected/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-kube-api-access-ps6gg\") pod \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.962936 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-inventory\") pod \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.962973 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-telemetry-combined-ca-bundle\") pod \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.963024 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-2\") pod \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.963079 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-0\") pod \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.963121 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ssh-key\") pod \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\" (UID: \"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638\") " Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.983630 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-kube-api-access-ps6gg" (OuterVolumeSpecName: "kube-api-access-ps6gg") pod "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" (UID: "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638"). InnerVolumeSpecName "kube-api-access-ps6gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:45:02 crc kubenswrapper[4858]: I1205 14:45:02.983630 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" (UID: "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.002363 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" (UID: "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.003460 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" (UID: "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.004559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-inventory" (OuterVolumeSpecName: "inventory") pod "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" (UID: "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.005738 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" (UID: "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.023411 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" (UID: "ccd90cd7-1d6f-4be1-a404-b81e6e5b6638"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.065458 4858 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.065492 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.065507 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.065518 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.065530 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.065542 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6gg\" (UniqueName: \"kubernetes.io/projected/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-kube-api-access-ps6gg\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.065552 4858 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ccd90cd7-1d6f-4be1-a404-b81e6e5b6638-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.085310 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.085385 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-j48gj" event={"ID":"ccd90cd7-1d6f-4be1-a404-b81e6e5b6638","Type":"ContainerDied","Data":"949c2b77bd6c630ae807002cf1e48603c9d2420f169e4c9f5b7a7fa24c720b48"} Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.085416 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="949c2b77bd6c630ae807002cf1e48603c9d2420f169e4c9f5b7a7fa24c720b48" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.392686 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.471985 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe05d25-a105-41a6-9443-eee7578072c4-config-volume\") pod \"afe05d25-a105-41a6-9443-eee7578072c4\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.472028 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe05d25-a105-41a6-9443-eee7578072c4-secret-volume\") pod \"afe05d25-a105-41a6-9443-eee7578072c4\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.472182 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d97xs\" (UniqueName: \"kubernetes.io/projected/afe05d25-a105-41a6-9443-eee7578072c4-kube-api-access-d97xs\") pod \"afe05d25-a105-41a6-9443-eee7578072c4\" (UID: \"afe05d25-a105-41a6-9443-eee7578072c4\") " Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.472776 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe05d25-a105-41a6-9443-eee7578072c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "afe05d25-a105-41a6-9443-eee7578072c4" (UID: "afe05d25-a105-41a6-9443-eee7578072c4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.476708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe05d25-a105-41a6-9443-eee7578072c4-kube-api-access-d97xs" (OuterVolumeSpecName: "kube-api-access-d97xs") pod "afe05d25-a105-41a6-9443-eee7578072c4" (UID: "afe05d25-a105-41a6-9443-eee7578072c4"). InnerVolumeSpecName "kube-api-access-d97xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.476789 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe05d25-a105-41a6-9443-eee7578072c4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afe05d25-a105-41a6-9443-eee7578072c4" (UID: "afe05d25-a105-41a6-9443-eee7578072c4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.575011 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe05d25-a105-41a6-9443-eee7578072c4-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.575337 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe05d25-a105-41a6-9443-eee7578072c4-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:03 crc kubenswrapper[4858]: I1205 14:45:03.575420 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d97xs\" (UniqueName: \"kubernetes.io/projected/afe05d25-a105-41a6-9443-eee7578072c4-kube-api-access-d97xs\") on node \"crc\" DevicePath \"\"" Dec 05 14:45:04 crc kubenswrapper[4858]: I1205 14:45:04.094626 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" event={"ID":"afe05d25-a105-41a6-9443-eee7578072c4","Type":"ContainerDied","Data":"f9213f0b1042202dd30aa747a4c16ce5d16dd4479f2433cf4d92783cdd3885c5"} Dec 05 14:45:04 crc kubenswrapper[4858]: I1205 14:45:04.094955 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9213f0b1042202dd30aa747a4c16ce5d16dd4479f2433cf4d92783cdd3885c5" Dec 05 14:45:04 crc kubenswrapper[4858]: I1205 14:45:04.094692 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp" Dec 05 14:45:04 crc kubenswrapper[4858]: I1205 14:45:04.464597 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6"] Dec 05 14:45:04 crc kubenswrapper[4858]: I1205 14:45:04.475156 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415720-fnqq6"] Dec 05 14:45:05 crc kubenswrapper[4858]: I1205 14:45:05.909027 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5b8ed5-1641-4428-8fff-05deab84fe14" path="/var/lib/kubelet/pods/0a5b8ed5-1641-4428-8fff-05deab84fe14/volumes" Dec 05 14:45:14 crc kubenswrapper[4858]: I1205 14:45:14.760222 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:45:14 crc kubenswrapper[4858]: I1205 14:45:14.761781 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:45:14 crc kubenswrapper[4858]: I1205 14:45:14.761943 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:45:14 crc kubenswrapper[4858]: I1205 14:45:14.762781 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6cd5a25857bdb027781c4ff36790c9019ff1005158df128e6511ad21138bb31"} 
pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:45:14 crc kubenswrapper[4858]: I1205 14:45:14.762933 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://e6cd5a25857bdb027781c4ff36790c9019ff1005158df128e6511ad21138bb31" gracePeriod=600 Dec 05 14:45:15 crc kubenswrapper[4858]: I1205 14:45:15.177877 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="e6cd5a25857bdb027781c4ff36790c9019ff1005158df128e6511ad21138bb31" exitCode=0 Dec 05 14:45:15 crc kubenswrapper[4858]: I1205 14:45:15.177919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"e6cd5a25857bdb027781c4ff36790c9019ff1005158df128e6511ad21138bb31"} Dec 05 14:45:15 crc kubenswrapper[4858]: I1205 14:45:15.178232 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae"} Dec 05 14:45:15 crc kubenswrapper[4858]: I1205 14:45:15.178260 4858 scope.go:117] "RemoveContainer" containerID="02e69ac4963d131614f81ec03a489008d8aa58b28159862c502ee6ea90342a96" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.096098 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Dec 05 14:45:58 crc kubenswrapper[4858]: E1205 14:45:58.097379 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.097401 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 14:45:58 crc kubenswrapper[4858]: E1205 14:45:58.097429 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe05d25-a105-41a6-9443-eee7578072c4" containerName="collect-profiles" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.097436 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe05d25-a105-41a6-9443-eee7578072c4" containerName="collect-profiles" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.097633 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd90cd7-1d6f-4be1-a404-b81e6e5b6638" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.097666 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe05d25-a105-41a6-9443-eee7578072c4" containerName="collect-profiles" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.098448 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.102396 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.102462 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.103065 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.103316 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xzq5q" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.110377 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.214902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj622\" (UniqueName: \"kubernetes.io/projected/2e4134d1-108e-42bc-81a5-7704e6dff1d2-kube-api-access-wj622\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.214950 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.214987 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.215038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.215092 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.215137 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.215176 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.215240 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.215290 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317439 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317491 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj622\" (UniqueName: \"kubernetes.io/projected/2e4134d1-108e-42bc-81a5-7704e6dff1d2-kube-api-access-wj622\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317520 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317548 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: 
\"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317594 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317643 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317680 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.317710 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.318030 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.318555 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.318639 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.319159 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: 
\"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.319605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.324145 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.327793 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.332266 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.335571 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj622\" (UniqueName: \"kubernetes.io/projected/2e4134d1-108e-42bc-81a5-7704e6dff1d2-kube-api-access-wj622\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.354388 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.422044 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 14:45:58 crc kubenswrapper[4858]: I1205 14:45:58.988038 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Dec 05 14:45:58 crc kubenswrapper[4858]: W1205 14:45:58.988636 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e4134d1_108e_42bc_81a5_7704e6dff1d2.slice/crio-9b8d93875e94f8c82d6ea5e6ae892756808364ff368e1764496a35f2dbc56036 WatchSource:0}: Error finding container 9b8d93875e94f8c82d6ea5e6ae892756808364ff368e1764496a35f2dbc56036: Status 404 returned error can't find the container with id 9b8d93875e94f8c82d6ea5e6ae892756808364ff368e1764496a35f2dbc56036 Dec 05 14:45:59 crc kubenswrapper[4858]: I1205 14:45:59.333124 4858 scope.go:117] "RemoveContainer" containerID="0e0ae0af0999967084d2efaeef15f83e57ad62a62a536eafae921ac7df148a6a" Dec 05 14:45:59 crc kubenswrapper[4858]: I1205 14:45:59.545117 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"2e4134d1-108e-42bc-81a5-7704e6dff1d2","Type":"ContainerStarted","Data":"9b8d93875e94f8c82d6ea5e6ae892756808364ff368e1764496a35f2dbc56036"} Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.656038 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbnmb"] Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.658725 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.670858 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbnmb"] Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.806605 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-utilities\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.806679 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-catalog-content\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.806722 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jcz\" (UniqueName: \"kubernetes.io/projected/0c28cc37-71a0-4f36-b24a-d296144c69c3-kube-api-access-85jcz\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.908356 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85jcz\" (UniqueName: \"kubernetes.io/projected/0c28cc37-71a0-4f36-b24a-d296144c69c3-kube-api-access-85jcz\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.908499 4858 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-utilities\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.908550 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-catalog-content\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.908956 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-catalog-content\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.909093 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-utilities\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.931512 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85jcz\" (UniqueName: \"kubernetes.io/projected/0c28cc37-71a0-4f36-b24a-d296144c69c3-kube-api-access-85jcz\") pod \"redhat-operators-cbnmb\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:27 crc kubenswrapper[4858]: I1205 14:46:27.994374 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:40 crc kubenswrapper[4858]: E1205 14:46:40.405704 4858 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-tempest-all:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:46:40 crc kubenswrapper[4858]: E1205 14:46:40.406395 4858 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.97:5001/podified-antelope-centos9/openstack-tempest-all:fa2bb8efef6782c26ea7f1675eeb36dd" Dec 05 14:46:40 crc kubenswrapper[4858]: E1205 14:46:40.409320 4858 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.97:5001/podified-antelope-centos9/openstack-tempest-all:fa2bb8efef6782c26ea7f1675eeb36dd,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj622,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSo
Dec 05 14:46:40 crc kubenswrapper[4858]: E1205 14:46:40.411547 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="2e4134d1-108e-42bc-81a5-7704e6dff1d2"
Dec 05 14:46:40 crc kubenswrapper[4858]: I1205 14:46:40.933798 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbnmb"]
Dec 05 14:46:40 crc kubenswrapper[4858]: I1205 14:46:40.966794 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbnmb" event={"ID":"0c28cc37-71a0-4f36-b24a-d296144c69c3","Type":"ContainerStarted","Data":"86fae80223fdc51b72a1d986e9852e2b381b2b9fa1524b534170a67c3726e6b8"}
Dec 05 14:46:40 crc kubenswrapper[4858]: E1205 14:46:40.978878 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.97:5001/podified-antelope-centos9/openstack-tempest-all:fa2bb8efef6782c26ea7f1675eeb36dd\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="2e4134d1-108e-42bc-81a5-7704e6dff1d2"
Dec 05 14:46:41 crc kubenswrapper[4858]: I1205 14:46:41.976885 4858 generic.go:334] "Generic (PLEG): container finished" podID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerID="24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344" exitCode=0
Dec 05 14:46:41 crc kubenswrapper[4858]: I1205 14:46:41.977137 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbnmb" event={"ID":"0c28cc37-71a0-4f36-b24a-d296144c69c3","Type":"ContainerDied","Data":"24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344"}
Dec 05 14:46:42 crc kubenswrapper[4858]: I1205 14:46:42.986285 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbnmb" event={"ID":"0c28cc37-71a0-4f36-b24a-d296144c69c3","Type":"ContainerStarted","Data":"463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09"}
Dec 05 14:46:48 crc kubenswrapper[4858]: I1205 14:46:48.027093 4858 generic.go:334] "Generic (PLEG): container finished" podID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerID="463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09" exitCode=0
Dec 05 14:46:48 crc kubenswrapper[4858]: I1205 14:46:48.027217 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbnmb" event={"ID":"0c28cc37-71a0-4f36-b24a-d296144c69c3","Type":"ContainerDied","Data":"463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09"}
Dec 05 14:46:52 crc kubenswrapper[4858]: I1205 14:46:52.067025 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbnmb" event={"ID":"0c28cc37-71a0-4f36-b24a-d296144c69c3","Type":"ContainerStarted","Data":"fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a"}
event={"ID":"0c28cc37-71a0-4f36-b24a-d296144c69c3","Type":"ContainerStarted","Data":"fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a"} Dec 05 14:46:52 crc kubenswrapper[4858]: I1205 14:46:52.925861 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbnmb" podStartSLOduration=17.941423868 podStartE2EDuration="25.925841546s" podCreationTimestamp="2025-12-05 14:46:27 +0000 UTC" firstStartedPulling="2025-12-05 14:46:41.979570563 +0000 UTC m=+3010.527168712" lastFinishedPulling="2025-12-05 14:46:49.963988251 +0000 UTC m=+3018.511586390" observedRunningTime="2025-12-05 14:46:52.085504653 +0000 UTC m=+3020.633102792" watchObservedRunningTime="2025-12-05 14:46:52.925841546 +0000 UTC m=+3021.473439685" Dec 05 14:46:55 crc kubenswrapper[4858]: I1205 14:46:55.710765 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 05 14:46:57 crc kubenswrapper[4858]: I1205 14:46:57.995525 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:57 crc kubenswrapper[4858]: I1205 14:46:57.996139 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:46:58 crc kubenswrapper[4858]: I1205 14:46:58.121608 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"2e4134d1-108e-42bc-81a5-7704e6dff1d2","Type":"ContainerStarted","Data":"b8fd651619c60c9da949e803155a4eea9a0af4412035cf97531d46cb34f28bb9"} Dec 05 14:46:58 crc kubenswrapper[4858]: I1205 14:46:58.145910 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=4.444713324 podStartE2EDuration="1m1.145891473s" podCreationTimestamp="2025-12-05 14:45:57 +0000 UTC" firstStartedPulling="2025-12-05 14:45:59.007266659 +0000 UTC m=+2967.554864798" lastFinishedPulling="2025-12-05 14:46:55.708444808 +0000 UTC m=+3024.256042947" observedRunningTime="2025-12-05 14:46:58.137021464 +0000 UTC m=+3026.684619603" watchObservedRunningTime="2025-12-05 14:46:58.145891473 +0000 UTC m=+3026.693489612" Dec 05 14:46:59 crc kubenswrapper[4858]: I1205 14:46:59.056160 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cbnmb" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerName="registry-server" probeResult="failure" output=< Dec 05 14:46:59 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:46:59 crc kubenswrapper[4858]: > Dec 05 14:47:08 crc kubenswrapper[4858]: I1205 14:47:08.047198 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:47:08 crc kubenswrapper[4858]: I1205 14:47:08.104020 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:47:08 crc kubenswrapper[4858]: I1205 14:47:08.285251 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbnmb"] Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.220370 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cbnmb" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerName="registry-server" 
containerID="cri-o://fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a" gracePeriod=2 Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.729656 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.863979 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85jcz\" (UniqueName: \"kubernetes.io/projected/0c28cc37-71a0-4f36-b24a-d296144c69c3-kube-api-access-85jcz\") pod \"0c28cc37-71a0-4f36-b24a-d296144c69c3\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.864181 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-utilities\") pod \"0c28cc37-71a0-4f36-b24a-d296144c69c3\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.864223 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-catalog-content\") pod \"0c28cc37-71a0-4f36-b24a-d296144c69c3\" (UID: \"0c28cc37-71a0-4f36-b24a-d296144c69c3\") " Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.865343 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-utilities" (OuterVolumeSpecName: "utilities") pod "0c28cc37-71a0-4f36-b24a-d296144c69c3" (UID: "0c28cc37-71a0-4f36-b24a-d296144c69c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.880414 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c28cc37-71a0-4f36-b24a-d296144c69c3-kube-api-access-85jcz" (OuterVolumeSpecName: "kube-api-access-85jcz") pod "0c28cc37-71a0-4f36-b24a-d296144c69c3" (UID: "0c28cc37-71a0-4f36-b24a-d296144c69c3"). InnerVolumeSpecName "kube-api-access-85jcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.967377 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.967424 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85jcz\" (UniqueName: \"kubernetes.io/projected/0c28cc37-71a0-4f36-b24a-d296144c69c3-kube-api-access-85jcz\") on node \"crc\" DevicePath \"\"" Dec 05 14:47:09 crc kubenswrapper[4858]: I1205 14:47:09.969555 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c28cc37-71a0-4f36-b24a-d296144c69c3" (UID: "0c28cc37-71a0-4f36-b24a-d296144c69c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.069621 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c28cc37-71a0-4f36-b24a-d296144c69c3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.230120 4858 generic.go:334] "Generic (PLEG): container finished" podID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerID="fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a" exitCode=0 Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.230357 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbnmb" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.230379 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbnmb" event={"ID":"0c28cc37-71a0-4f36-b24a-d296144c69c3","Type":"ContainerDied","Data":"fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a"} Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.231949 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbnmb" event={"ID":"0c28cc37-71a0-4f36-b24a-d296144c69c3","Type":"ContainerDied","Data":"86fae80223fdc51b72a1d986e9852e2b381b2b9fa1524b534170a67c3726e6b8"} Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.231990 4858 scope.go:117] "RemoveContainer" containerID="fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.254292 4858 scope.go:117] "RemoveContainer" containerID="463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.276581 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbnmb"] Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.285151 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cbnmb"] Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.286597 4858 scope.go:117] "RemoveContainer" containerID="24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.339129 4858 scope.go:117] "RemoveContainer" containerID="fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a" Dec 05 14:47:10 crc kubenswrapper[4858]: E1205 14:47:10.339524 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a\": container with ID starting with fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a not found: ID does not exist" containerID="fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.339564 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a"} err="failed to get container status \"fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a\": rpc error: code = NotFound desc = could not find container \"fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a\": container with ID starting with fa4368e9b824f296bb5dcb4c924ae2ecd3f0ea84422a07c7c8702ce39d6c737a not found: ID does not exist" Dec 05 14:47:10 crc 
kubenswrapper[4858]: I1205 14:47:10.339589 4858 scope.go:117] "RemoveContainer" containerID="463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09" Dec 05 14:47:10 crc kubenswrapper[4858]: E1205 14:47:10.339806 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09\": container with ID starting with 463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09 not found: ID does not exist" containerID="463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.339852 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09"} err="failed to get container status \"463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09\": rpc error: code = NotFound desc = could not find container \"463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09\": container with ID starting with 463458a420ce20909ccb09099d83eef127aa47df8c574310ca9c366bf3e47e09 not found: ID does not exist" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.339869 4858 scope.go:117] "RemoveContainer" containerID="24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344" Dec 05 14:47:10 crc kubenswrapper[4858]: E1205 14:47:10.340138 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344\": container with ID starting with 24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344 not found: ID does not exist" containerID="24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344" Dec 05 14:47:10 crc kubenswrapper[4858]: I1205 14:47:10.340164 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344"} err="failed to get container status \"24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344\": rpc error: code = NotFound desc = could not find container \"24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344\": container with ID starting with 24a018d23cabc1a39b3ad9fb654ba51661b4ff4b4b04b1b8219172b905d4c344 not found: ID does not exist" Dec 05 14:47:11 crc kubenswrapper[4858]: I1205 14:47:11.909640 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" path="/var/lib/kubelet/pods/0c28cc37-71a0-4f36-b24a-d296144c69c3/volumes" Dec 05 14:47:32 crc kubenswrapper[4858]: I1205 14:47:32.287759 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-qbj7t" podUID="b87af213-3539-45a1-bbe5-c4fd1161ff1b" containerName="registry-server" probeResult="failure" output=< Dec 05 14:47:32 crc kubenswrapper[4858]: timeout: health rpc did not complete within 1s Dec 05 14:47:32 crc kubenswrapper[4858]: > Dec 05 14:47:44 crc kubenswrapper[4858]: I1205 14:47:44.759820 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:47:44 crc kubenswrapper[4858]: I1205 14:47:44.760498 4858 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:48:03 crc kubenswrapper[4858]: I1205 14:48:03.685897 4858 patch_prober.go:28] interesting pod/controller-manager-74b47c9b9-pdvnc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 14:48:03 crc kubenswrapper[4858]: I1205 14:48:03.686456 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" podUID="34b7fa59-6622-4740-aa51-89d994381fe4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 14:48:14 crc kubenswrapper[4858]: I1205 14:48:14.760024 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:48:14 crc kubenswrapper[4858]: I1205 14:48:14.760513 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:48:44 crc kubenswrapper[4858]: I1205 14:48:44.759913 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:48:44 crc kubenswrapper[4858]: I1205 14:48:44.760327 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:48:44 crc kubenswrapper[4858]: I1205 14:48:44.760379 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:48:44 crc kubenswrapper[4858]: I1205 14:48:44.761220 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:48:44 crc kubenswrapper[4858]: I1205 14:48:44.761292 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" gracePeriod=600 Dec 05 14:48:44 crc kubenswrapper[4858]: E1205 14:48:44.883305 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:48:45 crc kubenswrapper[4858]: I1205 14:48:45.120704 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" exitCode=0 Dec 05 14:48:45 crc kubenswrapper[4858]: I1205 14:48:45.120750 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae"} Dec 05 14:48:45 crc kubenswrapper[4858]: I1205 14:48:45.120793 4858 scope.go:117] "RemoveContainer" containerID="e6cd5a25857bdb027781c4ff36790c9019ff1005158df128e6511ad21138bb31" Dec 05 14:48:45 crc kubenswrapper[4858]: I1205 14:48:45.121497 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:48:45 crc kubenswrapper[4858]: E1205 14:48:45.121890 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:48:55 crc kubenswrapper[4858]: I1205 14:48:55.900255 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:48:55 crc kubenswrapper[4858]: E1205 14:48:55.901032 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:49:03 crc kubenswrapper[4858]: I1205 14:49:03.935212 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qqq46"] Dec 05 14:49:03 crc kubenswrapper[4858]: E1205 14:49:03.936163 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerName="extract-utilities" Dec 05 14:49:03 crc kubenswrapper[4858]: I1205 14:49:03.936179 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerName="extract-utilities" Dec 05 14:49:03 crc kubenswrapper[4858]: E1205 14:49:03.936203 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" 
containerName="extract-content" Dec 05 14:49:03 crc kubenswrapper[4858]: I1205 14:49:03.936212 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerName="extract-content" Dec 05 14:49:03 crc kubenswrapper[4858]: E1205 14:49:03.936235 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerName="registry-server" Dec 05 14:49:03 crc kubenswrapper[4858]: I1205 14:49:03.936245 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerName="registry-server" Dec 05 14:49:03 crc kubenswrapper[4858]: I1205 14:49:03.936479 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c28cc37-71a0-4f36-b24a-d296144c69c3" containerName="registry-server" Dec 05 14:49:03 crc kubenswrapper[4858]: I1205 14:49:03.938564 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:03 crc kubenswrapper[4858]: I1205 14:49:03.957484 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqq46"] Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.057809 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn69z\" (UniqueName: \"kubernetes.io/projected/d688fa7c-acab-4fe3-ac33-3975b0588ceb-kube-api-access-sn69z\") pod \"certified-operators-qqq46\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.057871 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-utilities\") pod \"certified-operators-qqq46\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.057945 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-catalog-content\") pod \"certified-operators-qqq46\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.160096 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn69z\" (UniqueName: \"kubernetes.io/projected/d688fa7c-acab-4fe3-ac33-3975b0588ceb-kube-api-access-sn69z\") pod \"certified-operators-qqq46\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.160151 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-utilities\") pod \"certified-operators-qqq46\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.160223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-catalog-content\") pod \"certified-operators-qqq46\" (UID: 
\"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.160718 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-catalog-content\") pod \"certified-operators-qqq46\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.161319 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-utilities\") pod \"certified-operators-qqq46\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.179965 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn69z\" (UniqueName: \"kubernetes.io/projected/d688fa7c-acab-4fe3-ac33-3975b0588ceb-kube-api-access-sn69z\") pod \"certified-operators-qqq46\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:04 crc kubenswrapper[4858]: I1205 14:49:04.274243 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:05 crc kubenswrapper[4858]: I1205 14:49:05.800858 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qqq46"] Dec 05 14:49:05 crc kubenswrapper[4858]: W1205 14:49:05.818056 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd688fa7c_acab_4fe3_ac33_3975b0588ceb.slice/crio-380a91012d47f90a8a0bd263b5d255c09738717b9070425dbc208dba56f234ce WatchSource:0}: Error finding container 380a91012d47f90a8a0bd263b5d255c09738717b9070425dbc208dba56f234ce: Status 404 returned error can't find the container with id 380a91012d47f90a8a0bd263b5d255c09738717b9070425dbc208dba56f234ce Dec 05 14:49:06 crc kubenswrapper[4858]: I1205 14:49:06.348292 4858 generic.go:334] "Generic (PLEG): container finished" podID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerID="7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0" exitCode=0 Dec 05 14:49:06 crc kubenswrapper[4858]: I1205 14:49:06.348450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqq46" event={"ID":"d688fa7c-acab-4fe3-ac33-3975b0588ceb","Type":"ContainerDied","Data":"7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0"} Dec 05 14:49:06 crc kubenswrapper[4858]: I1205 14:49:06.348572 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqq46" event={"ID":"d688fa7c-acab-4fe3-ac33-3975b0588ceb","Type":"ContainerStarted","Data":"380a91012d47f90a8a0bd263b5d255c09738717b9070425dbc208dba56f234ce"} Dec 05 14:49:06 crc kubenswrapper[4858]: I1205 14:49:06.350450 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:49:07 crc kubenswrapper[4858]: I1205 14:49:07.359286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqq46" 
event={"ID":"d688fa7c-acab-4fe3-ac33-3975b0588ceb","Type":"ContainerStarted","Data":"b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777"} Dec 05 14:49:08 crc kubenswrapper[4858]: I1205 14:49:08.900401 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:49:08 crc kubenswrapper[4858]: E1205 14:49:08.900902 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:49:12 crc kubenswrapper[4858]: I1205 14:49:12.424752 4858 generic.go:334] "Generic (PLEG): container finished" podID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerID="b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777" exitCode=0 Dec 05 14:49:12 crc kubenswrapper[4858]: I1205 14:49:12.425905 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqq46" event={"ID":"d688fa7c-acab-4fe3-ac33-3975b0588ceb","Type":"ContainerDied","Data":"b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777"} Dec 05 14:49:14 crc kubenswrapper[4858]: I1205 14:49:14.454973 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqq46" event={"ID":"d688fa7c-acab-4fe3-ac33-3975b0588ceb","Type":"ContainerStarted","Data":"989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045"} Dec 05 14:49:14 crc kubenswrapper[4858]: I1205 14:49:14.495501 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qqq46" podStartSLOduration=3.959784934 podStartE2EDuration="11.495481684s" podCreationTimestamp="2025-12-05 14:49:03 +0000 UTC" firstStartedPulling="2025-12-05 14:49:06.350247174 +0000 UTC m=+3154.897845303" lastFinishedPulling="2025-12-05 14:49:13.885943914 +0000 UTC m=+3162.433542053" observedRunningTime="2025-12-05 14:49:14.483167312 +0000 UTC m=+3163.030765471" watchObservedRunningTime="2025-12-05 14:49:14.495481684 +0000 UTC m=+3163.043079813" Dec 05 14:49:19 crc kubenswrapper[4858]: I1205 14:49:19.899366 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:49:19 crc kubenswrapper[4858]: E1205 14:49:19.900179 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:49:24 crc kubenswrapper[4858]: I1205 14:49:24.274370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:24 crc kubenswrapper[4858]: I1205 14:49:24.274939 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:24 crc kubenswrapper[4858]: I1205 14:49:24.319467 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:24 crc kubenswrapper[4858]: I1205 14:49:24.593178 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:24 crc kubenswrapper[4858]: I1205 14:49:24.645666 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qqq46"] Dec 05 14:49:26 crc kubenswrapper[4858]: I1205 14:49:26.557328 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qqq46" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerName="registry-server" containerID="cri-o://989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045" gracePeriod=2 Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.015171 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.062439 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-utilities\") pod \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.062588 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn69z\" (UniqueName: \"kubernetes.io/projected/d688fa7c-acab-4fe3-ac33-3975b0588ceb-kube-api-access-sn69z\") pod \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.062672 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-catalog-content\") pod \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\" (UID: \"d688fa7c-acab-4fe3-ac33-3975b0588ceb\") " Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.063367 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-utilities" (OuterVolumeSpecName: "utilities") pod "d688fa7c-acab-4fe3-ac33-3975b0588ceb" (UID: "d688fa7c-acab-4fe3-ac33-3975b0588ceb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.068736 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d688fa7c-acab-4fe3-ac33-3975b0588ceb-kube-api-access-sn69z" (OuterVolumeSpecName: "kube-api-access-sn69z") pod "d688fa7c-acab-4fe3-ac33-3975b0588ceb" (UID: "d688fa7c-acab-4fe3-ac33-3975b0588ceb"). InnerVolumeSpecName "kube-api-access-sn69z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.111593 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d688fa7c-acab-4fe3-ac33-3975b0588ceb" (UID: "d688fa7c-acab-4fe3-ac33-3975b0588ceb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.164876 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn69z\" (UniqueName: \"kubernetes.io/projected/d688fa7c-acab-4fe3-ac33-3975b0588ceb-kube-api-access-sn69z\") on node \"crc\" DevicePath \"\"" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.164929 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.164939 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d688fa7c-acab-4fe3-ac33-3975b0588ceb-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.568081 4858 generic.go:334] "Generic (PLEG): container finished" podID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerID="989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045" exitCode=0 Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.568184 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qqq46" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.568169 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqq46" event={"ID":"d688fa7c-acab-4fe3-ac33-3975b0588ceb","Type":"ContainerDied","Data":"989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045"} Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.569412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qqq46" event={"ID":"d688fa7c-acab-4fe3-ac33-3975b0588ceb","Type":"ContainerDied","Data":"380a91012d47f90a8a0bd263b5d255c09738717b9070425dbc208dba56f234ce"} Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.569507 4858 scope.go:117] "RemoveContainer" containerID="989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.605717 4858 scope.go:117] "RemoveContainer" containerID="b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.622189 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qqq46"] Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.626243 4858 scope.go:117] "RemoveContainer" containerID="7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.631900 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qqq46"] Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.669258 4858 scope.go:117] "RemoveContainer" containerID="989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045" Dec 05 14:49:27 crc kubenswrapper[4858]: E1205 14:49:27.669810 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045\": container with ID starting with 989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045 not found: ID does not exist" containerID="989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.669860 
4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045"} err="failed to get container status \"989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045\": rpc error: code = NotFound desc = could not find container \"989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045\": container with ID starting with 989ccb223dfb51cd31d90c971d8042315b2cdb6e24a6062a9b185ccf75cbc045 not found: ID does not exist" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.669899 4858 scope.go:117] "RemoveContainer" containerID="b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777" Dec 05 14:49:27 crc kubenswrapper[4858]: E1205 14:49:27.670278 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777\": container with ID starting with b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777 not found: ID does not exist" containerID="b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.670307 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777"} err="failed to get container status \"b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777\": rpc error: code = NotFound desc = could not find container \"b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777\": container with ID starting with b7c7d791238ecc98b5ec977d3327266143994e659dc72bbe221b4f9814376777 not found: ID does not exist" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.670325 4858 scope.go:117] "RemoveContainer" containerID="7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0" Dec 05 14:49:27 crc kubenswrapper[4858]: E1205 14:49:27.670665 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0\": container with ID starting with 7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0 not found: ID does not exist" containerID="7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.670699 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0"} err="failed to get container status \"7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0\": rpc error: code = NotFound desc = could not find container \"7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0\": container with ID starting with 7cd4ea6d4c87c5d8544ba3dc0ba63136b63700abac8fe275022c779b79e934a0 not found: ID does not exist" Dec 05 14:49:27 crc kubenswrapper[4858]: I1205 14:49:27.909211 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" path="/var/lib/kubelet/pods/d688fa7c-acab-4fe3-ac33-3975b0588ceb/volumes" Dec 05 14:49:31 crc kubenswrapper[4858]: I1205 14:49:31.910563 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:49:31 crc kubenswrapper[4858]: E1205 14:49:31.912340 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:49:46 crc kubenswrapper[4858]: I1205 14:49:46.900462 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:49:46 crc kubenswrapper[4858]: E1205 14:49:46.901198 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:50:01 crc kubenswrapper[4858]: I1205 14:50:01.906118 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:50:01 crc kubenswrapper[4858]: E1205 14:50:01.906901 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:50:04 crc kubenswrapper[4858]: I1205 14:50:04.704019 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" podUID="4c9d3c6a-fda7-468e-9099-5f09c2dbdbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:50:14 crc kubenswrapper[4858]: I1205 14:50:14.898908 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:50:14 crc kubenswrapper[4858]: E1205 14:50:14.900713 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:50:26 crc kubenswrapper[4858]: I1205 14:50:26.899913 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:50:26 crc kubenswrapper[4858]: E1205 14:50:26.900547 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:50:38 crc kubenswrapper[4858]: I1205 14:50:38.899633 4858 
scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:50:38 crc kubenswrapper[4858]: E1205 14:50:38.900342 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:50:52 crc kubenswrapper[4858]: I1205 14:50:52.899772 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:50:52 crc kubenswrapper[4858]: E1205 14:50:52.900438 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:51:07 crc kubenswrapper[4858]: I1205 14:51:07.900188 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:51:07 crc kubenswrapper[4858]: E1205 14:51:07.900949 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:51:19 crc kubenswrapper[4858]: I1205 14:51:19.899424 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:51:19 crc kubenswrapper[4858]: E1205 14:51:19.900108 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.575014 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dr8b5"] Dec 05 14:51:24 crc kubenswrapper[4858]: E1205 14:51:24.576987 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerName="extract-content" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.577012 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerName="extract-content" Dec 05 14:51:24 crc kubenswrapper[4858]: E1205 14:51:24.577028 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerName="registry-server" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.577036 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" 
containerName="registry-server" Dec 05 14:51:24 crc kubenswrapper[4858]: E1205 14:51:24.577074 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerName="extract-utilities" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.577082 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerName="extract-utilities" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.577915 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d688fa7c-acab-4fe3-ac33-3975b0588ceb" containerName="registry-server" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.581477 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.607561 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr8b5"] Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.763703 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-catalog-content\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.764030 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-utilities\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.764129 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp2qs\" (UniqueName: \"kubernetes.io/projected/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-kube-api-access-wp2qs\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.865938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-catalog-content\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.865986 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-utilities\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.866049 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp2qs\" (UniqueName: \"kubernetes.io/projected/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-kube-api-access-wp2qs\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.867981 4858 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-catalog-content\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.867986 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-utilities\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.899799 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp2qs\" (UniqueName: \"kubernetes.io/projected/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-kube-api-access-wp2qs\") pod \"redhat-marketplace-dr8b5\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:24 crc kubenswrapper[4858]: I1205 14:51:24.906579 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:25 crc kubenswrapper[4858]: I1205 14:51:25.753824 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr8b5"] Dec 05 14:51:26 crc kubenswrapper[4858]: I1205 14:51:26.708263 4858 generic.go:334] "Generic (PLEG): container finished" podID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerID="4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873" exitCode=0 Dec 05 14:51:26 crc kubenswrapper[4858]: I1205 14:51:26.708350 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr8b5" event={"ID":"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3","Type":"ContainerDied","Data":"4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873"} Dec 05 14:51:26 crc kubenswrapper[4858]: I1205 14:51:26.709077 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr8b5" event={"ID":"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3","Type":"ContainerStarted","Data":"eaea64e5040a6fc07d3199e6a5f1c8c3e7f106163b50250a905712a9244bf461"} Dec 05 14:51:27 crc kubenswrapper[4858]: I1205 14:51:27.719621 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr8b5" event={"ID":"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3","Type":"ContainerStarted","Data":"562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea"} Dec 05 14:51:30 crc kubenswrapper[4858]: I1205 14:51:30.745065 4858 generic.go:334] "Generic (PLEG): container finished" podID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerID="562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea" exitCode=0 Dec 05 14:51:30 crc kubenswrapper[4858]: I1205 14:51:30.745098 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr8b5" event={"ID":"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3","Type":"ContainerDied","Data":"562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea"} Dec 05 14:51:31 crc kubenswrapper[4858]: I1205 14:51:31.760809 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr8b5" event={"ID":"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3","Type":"ContainerStarted","Data":"ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313"} Dec 05 14:51:31 crc 
kubenswrapper[4858]: I1205 14:51:31.779812 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dr8b5" podStartSLOduration=3.230132972 podStartE2EDuration="7.778744616s" podCreationTimestamp="2025-12-05 14:51:24 +0000 UTC" firstStartedPulling="2025-12-05 14:51:26.714948531 +0000 UTC m=+3295.262546670" lastFinishedPulling="2025-12-05 14:51:31.263560175 +0000 UTC m=+3299.811158314" observedRunningTime="2025-12-05 14:51:31.777175213 +0000 UTC m=+3300.324773352" watchObservedRunningTime="2025-12-05 14:51:31.778744616 +0000 UTC m=+3300.326342755" Dec 05 14:51:33 crc kubenswrapper[4858]: I1205 14:51:33.922401 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:51:33 crc kubenswrapper[4858]: E1205 14:51:33.923248 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:51:34 crc kubenswrapper[4858]: I1205 14:51:34.908502 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:34 crc kubenswrapper[4858]: I1205 14:51:34.908549 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:35 crc kubenswrapper[4858]: I1205 14:51:35.973034 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dr8b5" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerName="registry-server" probeResult="failure" output=< Dec 05 14:51:35 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:51:35 crc kubenswrapper[4858]: > Dec 05 14:51:44 crc kubenswrapper[4858]: I1205 14:51:44.901695 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:51:44 crc kubenswrapper[4858]: E1205 14:51:44.904298 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:51:44 crc kubenswrapper[4858]: I1205 14:51:44.967948 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:45 crc kubenswrapper[4858]: I1205 14:51:45.028695 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:45 crc kubenswrapper[4858]: I1205 14:51:45.217533 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr8b5"] Dec 05 14:51:46 crc kubenswrapper[4858]: I1205 14:51:46.962316 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dr8b5" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" 
containerName="registry-server" containerID="cri-o://ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313" gracePeriod=2 Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.696874 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.776274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-utilities\") pod \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.776385 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp2qs\" (UniqueName: \"kubernetes.io/projected/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-kube-api-access-wp2qs\") pod \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.776503 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-catalog-content\") pod \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\" (UID: \"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3\") " Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.778589 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-utilities" (OuterVolumeSpecName: "utilities") pod "8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" (UID: "8e2b39fe-bfef-43bf-af8d-7c02aa525fe3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.794489 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-kube-api-access-wp2qs" (OuterVolumeSpecName: "kube-api-access-wp2qs") pod "8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" (UID: "8e2b39fe-bfef-43bf-af8d-7c02aa525fe3"). InnerVolumeSpecName "kube-api-access-wp2qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.796087 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" (UID: "8e2b39fe-bfef-43bf-af8d-7c02aa525fe3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.879555 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.879586 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp2qs\" (UniqueName: \"kubernetes.io/projected/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-kube-api-access-wp2qs\") on node \"crc\" DevicePath \"\"" Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.879600 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.972681 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr8b5" Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.972735 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr8b5" event={"ID":"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3","Type":"ContainerDied","Data":"ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313"} Dec 05 14:51:47 crc kubenswrapper[4858]: I1205 14:51:47.998604 4858 generic.go:334] "Generic (PLEG): container finished" podID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerID="ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313" exitCode=0 Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.001789 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr8b5" event={"ID":"8e2b39fe-bfef-43bf-af8d-7c02aa525fe3","Type":"ContainerDied","Data":"eaea64e5040a6fc07d3199e6a5f1c8c3e7f106163b50250a905712a9244bf461"} Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.001859 4858 scope.go:117] "RemoveContainer" containerID="ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313" Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.027913 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr8b5"] Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.046817 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr8b5"] Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.067585 4858 scope.go:117] "RemoveContainer" containerID="562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea" Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.112425 4858 scope.go:117] "RemoveContainer" containerID="4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873" Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.150313 4858 scope.go:117] "RemoveContainer" containerID="ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313" Dec 05 14:51:48 crc kubenswrapper[4858]: E1205 14:51:48.151815 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313\": container with ID starting with ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313 not found: ID does not exist" containerID="ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313" Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.151865 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313"} err="failed to get container status \"ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313\": rpc error: code = NotFound desc = could not find container \"ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313\": container with ID starting with ecb1ec5a8128a74bbb5371bacb6684f9d57b1349759ff8d6fe479f40f980f313 not found: ID does not exist" Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.151886 4858 scope.go:117] "RemoveContainer" containerID="562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea" Dec 05 14:51:48 crc kubenswrapper[4858]: E1205 14:51:48.152220 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea\": container with ID starting with 562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea not found: ID does not exist" containerID="562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea" Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.152254 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea"} err="failed to get container status \"562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea\": rpc error: code = NotFound desc = could not find container \"562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea\": container with ID starting with 562d04bec80a872bdfdb598523cf6bac3ba05773fc3e0810330a86a1449a93ea not found: ID does not exist" Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.152268 4858 scope.go:117] "RemoveContainer" containerID="4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873" Dec 05 14:51:48 crc kubenswrapper[4858]: E1205 14:51:48.152481 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873\": container with ID starting with 4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873 not found: ID does not exist" containerID="4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873" Dec 05 14:51:48 crc kubenswrapper[4858]: I1205 14:51:48.152497 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873"} err="failed to get container status \"4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873\": rpc error: code = NotFound desc = could not find container \"4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873\": container with ID starting with 4770978074da8d228f8e288dd003d6229709389c9fec0c315d47b03828ec0873 not found: ID does not exist" Dec 05 14:51:49 crc kubenswrapper[4858]: I1205 14:51:49.908253 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" path="/var/lib/kubelet/pods/8e2b39fe-bfef-43bf-af8d-7c02aa525fe3/volumes" Dec 05 14:51:57 crc kubenswrapper[4858]: I1205 14:51:57.899680 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:51:57 crc kubenswrapper[4858]: E1205 14:51:57.900499 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:52:11 crc kubenswrapper[4858]: I1205 14:52:11.907994 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:52:11 crc kubenswrapper[4858]: E1205 14:52:11.908660 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:52:25 crc kubenswrapper[4858]: I1205 14:52:25.901565 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:52:25 crc kubenswrapper[4858]: E1205 14:52:25.902517 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:52:38 crc kubenswrapper[4858]: I1205 14:52:38.899594 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:52:38 crc kubenswrapper[4858]: E1205 14:52:38.901466 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:52:52 crc kubenswrapper[4858]: I1205 14:52:52.899697 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:52:52 crc kubenswrapper[4858]: E1205 14:52:52.900426 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:53:04 crc kubenswrapper[4858]: I1205 14:53:03.904014 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:53:04 crc kubenswrapper[4858]: E1205 14:53:03.904863 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:53:14 crc kubenswrapper[4858]: I1205 14:53:14.899308 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:53:14 crc kubenswrapper[4858]: E1205 14:53:14.900087 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:53:25 crc kubenswrapper[4858]: I1205 14:53:25.899595 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:53:25 crc kubenswrapper[4858]: E1205 14:53:25.900216 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:53:37 crc kubenswrapper[4858]: I1205 14:53:37.899074 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:53:37 crc kubenswrapper[4858]: E1205 14:53:37.899899 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 14:53:52 crc kubenswrapper[4858]: I1205 14:53:52.900219 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:53:53 crc kubenswrapper[4858]: I1205 14:53:53.167671 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"94feaac31b8084a4c9c8b1f276d2f86b32f1ae29a3dc586cf0bbd4c277523660"} Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.081179 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lvqb9"] Dec 05 14:55:19 crc kubenswrapper[4858]: E1205 14:55:19.083352 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerName="extract-content" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.083469 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerName="extract-content" Dec 05 14:55:19 crc kubenswrapper[4858]: E1205 14:55:19.083601 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerName="registry-server" Dec 05 14:55:19 crc 
kubenswrapper[4858]: I1205 14:55:19.083612 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerName="registry-server" Dec 05 14:55:19 crc kubenswrapper[4858]: E1205 14:55:19.083629 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerName="extract-utilities" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.083637 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerName="extract-utilities" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.084283 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2b39fe-bfef-43bf-af8d-7c02aa525fe3" containerName="registry-server" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.087803 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.228462 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcvh\" (UniqueName: \"kubernetes.io/projected/a4a167ff-fe76-45c6-b01a-a815deabf210-kube-api-access-swcvh\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.228545 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-catalog-content\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.228848 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-utilities\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.316443 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvqb9"] Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.330909 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-utilities\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.330980 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swcvh\" (UniqueName: \"kubernetes.io/projected/a4a167ff-fe76-45c6-b01a-a815deabf210-kube-api-access-swcvh\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.331022 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-catalog-content\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " 
pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.333430 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-catalog-content\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.333433 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-utilities\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.391686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swcvh\" (UniqueName: \"kubernetes.io/projected/a4a167ff-fe76-45c6-b01a-a815deabf210-kube-api-access-swcvh\") pod \"community-operators-lvqb9\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:19 crc kubenswrapper[4858]: I1205 14:55:19.417142 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:20 crc kubenswrapper[4858]: I1205 14:55:20.256244 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvqb9"] Dec 05 14:55:20 crc kubenswrapper[4858]: E1205 14:55:20.723687 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a167ff_fe76_45c6_b01a_a815deabf210.slice/crio-conmon-cfcd51060e3e3de5341228c5bb6ddefb357fa57b99e62adc7a58281784a4e1f1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a167ff_fe76_45c6_b01a_a815deabf210.slice/crio-cfcd51060e3e3de5341228c5bb6ddefb357fa57b99e62adc7a58281784a4e1f1.scope\": RecentStats: unable to find data in memory cache]" Dec 05 14:55:21 crc kubenswrapper[4858]: I1205 14:55:21.006576 4858 generic.go:334] "Generic (PLEG): container finished" podID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerID="cfcd51060e3e3de5341228c5bb6ddefb357fa57b99e62adc7a58281784a4e1f1" exitCode=0 Dec 05 14:55:21 crc kubenswrapper[4858]: I1205 14:55:21.007028 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqb9" event={"ID":"a4a167ff-fe76-45c6-b01a-a815deabf210","Type":"ContainerDied","Data":"cfcd51060e3e3de5341228c5bb6ddefb357fa57b99e62adc7a58281784a4e1f1"} Dec 05 14:55:21 crc kubenswrapper[4858]: I1205 14:55:21.007673 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqb9" event={"ID":"a4a167ff-fe76-45c6-b01a-a815deabf210","Type":"ContainerStarted","Data":"e827b78c134e1b73488fe2923bdc647de886158bdd41b0ab1eeddf0ad9403708"} Dec 05 14:55:21 crc kubenswrapper[4858]: I1205 14:55:21.010696 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 14:55:22 crc kubenswrapper[4858]: I1205 14:55:22.017844 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqb9" 
event={"ID":"a4a167ff-fe76-45c6-b01a-a815deabf210","Type":"ContainerStarted","Data":"3d0b81552b9c7adb7801248775f0a3fe2215b8ba0138a5015c22bb5e07f41c44"} Dec 05 14:55:24 crc kubenswrapper[4858]: I1205 14:55:24.035677 4858 generic.go:334] "Generic (PLEG): container finished" podID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerID="3d0b81552b9c7adb7801248775f0a3fe2215b8ba0138a5015c22bb5e07f41c44" exitCode=0 Dec 05 14:55:24 crc kubenswrapper[4858]: I1205 14:55:24.035761 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqb9" event={"ID":"a4a167ff-fe76-45c6-b01a-a815deabf210","Type":"ContainerDied","Data":"3d0b81552b9c7adb7801248775f0a3fe2215b8ba0138a5015c22bb5e07f41c44"} Dec 05 14:55:26 crc kubenswrapper[4858]: I1205 14:55:26.054662 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqb9" event={"ID":"a4a167ff-fe76-45c6-b01a-a815deabf210","Type":"ContainerStarted","Data":"e38c262b42469b3164cbc0f3b3bf6a47d1a39f624fd084aaa4c09d7146beeed7"} Dec 05 14:55:26 crc kubenswrapper[4858]: I1205 14:55:26.166059 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lvqb9" podStartSLOduration=4.174492259 podStartE2EDuration="8.163922667s" podCreationTimestamp="2025-12-05 14:55:18 +0000 UTC" firstStartedPulling="2025-12-05 14:55:21.009280879 +0000 UTC m=+3529.556879008" lastFinishedPulling="2025-12-05 14:55:24.998711277 +0000 UTC m=+3533.546309416" observedRunningTime="2025-12-05 14:55:26.157874444 +0000 UTC m=+3534.705472583" watchObservedRunningTime="2025-12-05 14:55:26.163922667 +0000 UTC m=+3534.711520806" Dec 05 14:55:29 crc kubenswrapper[4858]: I1205 14:55:29.417566 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:29 crc kubenswrapper[4858]: I1205 14:55:29.419374 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:30 crc kubenswrapper[4858]: I1205 14:55:30.486705 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lvqb9" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="registry-server" probeResult="failure" output=< Dec 05 14:55:30 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:55:30 crc kubenswrapper[4858]: > Dec 05 14:55:39 crc kubenswrapper[4858]: I1205 14:55:39.541933 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:39 crc kubenswrapper[4858]: I1205 14:55:39.771651 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:41 crc kubenswrapper[4858]: I1205 14:55:41.347144 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvqb9"] Dec 05 14:55:41 crc kubenswrapper[4858]: I1205 14:55:41.349517 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lvqb9" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="registry-server" containerID="cri-o://e38c262b42469b3164cbc0f3b3bf6a47d1a39f624fd084aaa4c09d7146beeed7" gracePeriod=2 Dec 05 14:55:42 crc kubenswrapper[4858]: I1205 14:55:42.208147 4858 generic.go:334] "Generic (PLEG): container finished" 
podID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerID="e38c262b42469b3164cbc0f3b3bf6a47d1a39f624fd084aaa4c09d7146beeed7" exitCode=0 Dec 05 14:55:42 crc kubenswrapper[4858]: I1205 14:55:42.208482 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqb9" event={"ID":"a4a167ff-fe76-45c6-b01a-a815deabf210","Type":"ContainerDied","Data":"e38c262b42469b3164cbc0f3b3bf6a47d1a39f624fd084aaa4c09d7146beeed7"} Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.236826 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvqb9" event={"ID":"a4a167ff-fe76-45c6-b01a-a815deabf210","Type":"ContainerDied","Data":"e827b78c134e1b73488fe2923bdc647de886158bdd41b0ab1eeddf0ad9403708"} Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.237484 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e827b78c134e1b73488fe2923bdc647de886158bdd41b0ab1eeddf0ad9403708" Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.355802 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.465440 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-utilities\") pod \"a4a167ff-fe76-45c6-b01a-a815deabf210\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.466021 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swcvh\" (UniqueName: \"kubernetes.io/projected/a4a167ff-fe76-45c6-b01a-a815deabf210-kube-api-access-swcvh\") pod \"a4a167ff-fe76-45c6-b01a-a815deabf210\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.466080 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-catalog-content\") pod \"a4a167ff-fe76-45c6-b01a-a815deabf210\" (UID: \"a4a167ff-fe76-45c6-b01a-a815deabf210\") " Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.467924 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-utilities" (OuterVolumeSpecName: "utilities") pod "a4a167ff-fe76-45c6-b01a-a815deabf210" (UID: "a4a167ff-fe76-45c6-b01a-a815deabf210"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.482318 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a167ff-fe76-45c6-b01a-a815deabf210-kube-api-access-swcvh" (OuterVolumeSpecName: "kube-api-access-swcvh") pod "a4a167ff-fe76-45c6-b01a-a815deabf210" (UID: "a4a167ff-fe76-45c6-b01a-a815deabf210"). InnerVolumeSpecName "kube-api-access-swcvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.539249 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4a167ff-fe76-45c6-b01a-a815deabf210" (UID: "a4a167ff-fe76-45c6-b01a-a815deabf210"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.568959 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swcvh\" (UniqueName: \"kubernetes.io/projected/a4a167ff-fe76-45c6-b01a-a815deabf210-kube-api-access-swcvh\") on node \"crc\" DevicePath \"\"" Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.569007 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:55:43 crc kubenswrapper[4858]: I1205 14:55:43.569021 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4a167ff-fe76-45c6-b01a-a815deabf210-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:55:44 crc kubenswrapper[4858]: I1205 14:55:44.245441 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvqb9" Dec 05 14:55:44 crc kubenswrapper[4858]: I1205 14:55:44.272210 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvqb9"] Dec 05 14:55:44 crc kubenswrapper[4858]: I1205 14:55:44.283959 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lvqb9"] Dec 05 14:55:45 crc kubenswrapper[4858]: I1205 14:55:45.909958 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" path="/var/lib/kubelet/pods/a4a167ff-fe76-45c6-b01a-a815deabf210/volumes" Dec 05 14:56:01 crc kubenswrapper[4858]: I1205 14:56:01.641189 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" podUID="ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:56:14 crc kubenswrapper[4858]: I1205 14:56:14.759772 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:56:14 crc kubenswrapper[4858]: I1205 14:56:14.804792 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:56:44 crc kubenswrapper[4858]: I1205 14:56:44.760860 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:56:44 crc kubenswrapper[4858]: I1205 14:56:44.761804 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 05 14:56:48 crc kubenswrapper[4858]: I1205 14:56:48.797053 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kkb8l"] Dec 05 14:56:48 crc kubenswrapper[4858]: E1205 14:56:48.798808 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="extract-content" Dec 05 14:56:48 crc kubenswrapper[4858]: I1205 14:56:48.798833 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="extract-content" Dec 05 14:56:48 crc kubenswrapper[4858]: E1205 14:56:48.798868 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="registry-server" Dec 05 14:56:48 crc kubenswrapper[4858]: I1205 14:56:48.798875 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="registry-server" Dec 05 14:56:48 crc kubenswrapper[4858]: E1205 14:56:48.798901 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="extract-utilities" Dec 05 14:56:48 crc kubenswrapper[4858]: I1205 14:56:48.798907 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="extract-utilities" Dec 05 14:56:48 crc kubenswrapper[4858]: I1205 14:56:48.799273 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a167ff-fe76-45c6-b01a-a815deabf210" containerName="registry-server" Dec 05 14:56:48 crc kubenswrapper[4858]: I1205 14:56:48.802006 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.106218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rx6v\" (UniqueName: \"kubernetes.io/projected/25de7759-00a5-4912-bad3-fe1d44d10a0c-kube-api-access-4rx6v\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.106606 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-catalog-content\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.106821 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-utilities\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.208020 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkb8l"] Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.209093 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-catalog-content\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " 
pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.209196 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-utilities\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.209352 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rx6v\" (UniqueName: \"kubernetes.io/projected/25de7759-00a5-4912-bad3-fe1d44d10a0c-kube-api-access-4rx6v\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.211894 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-catalog-content\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.212304 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-utilities\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.267185 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rx6v\" (UniqueName: \"kubernetes.io/projected/25de7759-00a5-4912-bad3-fe1d44d10a0c-kube-api-access-4rx6v\") pod \"redhat-operators-kkb8l\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:49 crc kubenswrapper[4858]: I1205 14:56:49.428468 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:56:50 crc kubenswrapper[4858]: I1205 14:56:50.255714 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkb8l"] Dec 05 14:56:51 crc kubenswrapper[4858]: I1205 14:56:51.109296 4858 generic.go:334] "Generic (PLEG): container finished" podID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerID="6ece09d919203d89481f0b5d33cd4651f64c8955039c7cf875d4282eda6ac46b" exitCode=0 Dec 05 14:56:51 crc kubenswrapper[4858]: I1205 14:56:51.109555 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkb8l" event={"ID":"25de7759-00a5-4912-bad3-fe1d44d10a0c","Type":"ContainerDied","Data":"6ece09d919203d89481f0b5d33cd4651f64c8955039c7cf875d4282eda6ac46b"} Dec 05 14:56:51 crc kubenswrapper[4858]: I1205 14:56:51.110062 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkb8l" event={"ID":"25de7759-00a5-4912-bad3-fe1d44d10a0c","Type":"ContainerStarted","Data":"503d51e9af2433f02e378aa3a3c2541ed7b6649b9403737c526d94b9780bbf05"} Dec 05 14:56:52 crc kubenswrapper[4858]: I1205 14:56:52.121027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkb8l" event={"ID":"25de7759-00a5-4912-bad3-fe1d44d10a0c","Type":"ContainerStarted","Data":"73490784f49c910d90d4d93af6906515b9fa9ee64bb6188dd0b38529dbbe11e5"} Dec 05 14:56:59 crc kubenswrapper[4858]: I1205 14:56:59.183275 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkb8l" event={"ID":"25de7759-00a5-4912-bad3-fe1d44d10a0c","Type":"ContainerDied","Data":"73490784f49c910d90d4d93af6906515b9fa9ee64bb6188dd0b38529dbbe11e5"} Dec 05 14:56:59 crc kubenswrapper[4858]: I1205 14:56:59.183429 4858 generic.go:334] "Generic (PLEG): container finished" podID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerID="73490784f49c910d90d4d93af6906515b9fa9ee64bb6188dd0b38529dbbe11e5" exitCode=0 Dec 05 14:57:00 crc kubenswrapper[4858]: I1205 14:57:00.194910 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkb8l" event={"ID":"25de7759-00a5-4912-bad3-fe1d44d10a0c","Type":"ContainerStarted","Data":"4571a6afc3d8b925d078031d710ba1269247316448208d7ced6a6d2dc39c47db"} Dec 05 14:57:09 crc kubenswrapper[4858]: I1205 14:57:09.429629 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:57:09 crc kubenswrapper[4858]: I1205 14:57:09.430353 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:57:10 crc kubenswrapper[4858]: I1205 14:57:10.519709 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kkb8l" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="registry-server" probeResult="failure" output=< Dec 05 14:57:10 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:57:10 crc kubenswrapper[4858]: > Dec 05 14:57:13 crc kubenswrapper[4858]: I1205 14:57:13.558379 4858 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-fgpw2 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 
14:57:13 crc kubenswrapper[4858]: I1205 14:57:13.559062 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" podUID="6e6696fd-dfa5-4863-ae4f-bac4c2379404" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 14:57:13 crc kubenswrapper[4858]: I1205 14:57:13.725029 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:57:13 crc kubenswrapper[4858]: I1205 14:57:13.725789 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 14:57:14 crc kubenswrapper[4858]: I1205 14:57:14.759611 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:57:14 crc kubenswrapper[4858]: I1205 14:57:14.759661 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:57:14 crc kubenswrapper[4858]: I1205 14:57:14.760516 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 14:57:14 crc kubenswrapper[4858]: I1205 14:57:14.761929 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94feaac31b8084a4c9c8b1f276d2f86b32f1ae29a3dc586cf0bbd4c277523660"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 14:57:14 crc kubenswrapper[4858]: I1205 14:57:14.762408 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://94feaac31b8084a4c9c8b1f276d2f86b32f1ae29a3dc586cf0bbd4c277523660" gracePeriod=600 Dec 05 14:57:14 crc kubenswrapper[4858]: I1205 14:57:14.796711 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 14:57:14 crc kubenswrapper[4858]: I1205 14:57:14.797247 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command 
timed out" Dec 05 14:57:15 crc kubenswrapper[4858]: I1205 14:57:15.351111 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="94feaac31b8084a4c9c8b1f276d2f86b32f1ae29a3dc586cf0bbd4c277523660" exitCode=0 Dec 05 14:57:15 crc kubenswrapper[4858]: I1205 14:57:15.351470 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"94feaac31b8084a4c9c8b1f276d2f86b32f1ae29a3dc586cf0bbd4c277523660"} Dec 05 14:57:15 crc kubenswrapper[4858]: I1205 14:57:15.351703 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003"} Dec 05 14:57:15 crc kubenswrapper[4858]: I1205 14:57:15.352535 4858 scope.go:117] "RemoveContainer" containerID="ebf74bb673c15849e0f1c35f9cf1c4f0cfc1e834679056b3e01c947bc3b3d1ae" Dec 05 14:57:15 crc kubenswrapper[4858]: I1205 14:57:15.525901 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kkb8l" podStartSLOduration=19.045986398 podStartE2EDuration="27.525639724s" podCreationTimestamp="2025-12-05 14:56:48 +0000 UTC" firstStartedPulling="2025-12-05 14:56:51.111382024 +0000 UTC m=+3619.658980163" lastFinishedPulling="2025-12-05 14:56:59.59103535 +0000 UTC m=+3628.138633489" observedRunningTime="2025-12-05 14:57:00.225203151 +0000 UTC m=+3628.772801320" watchObservedRunningTime="2025-12-05 14:57:15.525639724 +0000 UTC m=+3644.073237863" Dec 05 14:57:21 crc kubenswrapper[4858]: I1205 14:57:21.220419 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kkb8l" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="registry-server" probeResult="failure" output=< Dec 05 14:57:21 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:57:21 crc kubenswrapper[4858]: > Dec 05 14:57:29 crc kubenswrapper[4858]: E1205 14:57:29.498682 4858 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.174:60962->38.102.83.174:41641: write tcp 38.102.83.174:60962->38.102.83.174:41641: write: broken pipe Dec 05 14:57:29 crc kubenswrapper[4858]: E1205 14:57:29.498690 4858 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.174:60962->38.102.83.174:41641: read tcp 38.102.83.174:60962->38.102.83.174:41641: read: connection reset by peer Dec 05 14:57:30 crc kubenswrapper[4858]: I1205 14:57:30.513912 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kkb8l" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="registry-server" probeResult="failure" output=< Dec 05 14:57:30 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:57:30 crc kubenswrapper[4858]: > Dec 05 14:57:40 crc kubenswrapper[4858]: I1205 14:57:40.514138 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kkb8l" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="registry-server" probeResult="failure" output=< Dec 05 14:57:40 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:57:40 crc kubenswrapper[4858]: > Dec 05 14:57:49 crc 
kubenswrapper[4858]: I1205 14:57:49.502115 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:57:49 crc kubenswrapper[4858]: I1205 14:57:49.553610 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:57:50 crc kubenswrapper[4858]: I1205 14:57:50.628287 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkb8l"] Dec 05 14:57:50 crc kubenswrapper[4858]: I1205 14:57:50.649846 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kkb8l" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="registry-server" containerID="cri-o://4571a6afc3d8b925d078031d710ba1269247316448208d7ced6a6d2dc39c47db" gracePeriod=2 Dec 05 14:57:51 crc kubenswrapper[4858]: I1205 14:57:51.726675 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkb8l" event={"ID":"25de7759-00a5-4912-bad3-fe1d44d10a0c","Type":"ContainerDied","Data":"4571a6afc3d8b925d078031d710ba1269247316448208d7ced6a6d2dc39c47db"} Dec 05 14:57:51 crc kubenswrapper[4858]: I1205 14:57:51.728567 4858 generic.go:334] "Generic (PLEG): container finished" podID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerID="4571a6afc3d8b925d078031d710ba1269247316448208d7ced6a6d2dc39c47db" exitCode=0 Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.049034 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.234178 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rx6v\" (UniqueName: \"kubernetes.io/projected/25de7759-00a5-4912-bad3-fe1d44d10a0c-kube-api-access-4rx6v\") pod \"25de7759-00a5-4912-bad3-fe1d44d10a0c\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.234227 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-catalog-content\") pod \"25de7759-00a5-4912-bad3-fe1d44d10a0c\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.234362 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-utilities\") pod \"25de7759-00a5-4912-bad3-fe1d44d10a0c\" (UID: \"25de7759-00a5-4912-bad3-fe1d44d10a0c\") " Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.237613 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-utilities" (OuterVolumeSpecName: "utilities") pod "25de7759-00a5-4912-bad3-fe1d44d10a0c" (UID: "25de7759-00a5-4912-bad3-fe1d44d10a0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.262302 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25de7759-00a5-4912-bad3-fe1d44d10a0c-kube-api-access-4rx6v" (OuterVolumeSpecName: "kube-api-access-4rx6v") pod "25de7759-00a5-4912-bad3-fe1d44d10a0c" (UID: "25de7759-00a5-4912-bad3-fe1d44d10a0c"). 
InnerVolumeSpecName "kube-api-access-4rx6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.337190 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rx6v\" (UniqueName: \"kubernetes.io/projected/25de7759-00a5-4912-bad3-fe1d44d10a0c-kube-api-access-4rx6v\") on node \"crc\" DevicePath \"\"" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.337467 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.383457 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25de7759-00a5-4912-bad3-fe1d44d10a0c" (UID: "25de7759-00a5-4912-bad3-fe1d44d10a0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.439106 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25de7759-00a5-4912-bad3-fe1d44d10a0c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.739284 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkb8l" event={"ID":"25de7759-00a5-4912-bad3-fe1d44d10a0c","Type":"ContainerDied","Data":"503d51e9af2433f02e378aa3a3c2541ed7b6649b9403737c526d94b9780bbf05"} Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.739350 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kkb8l" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.739717 4858 scope.go:117] "RemoveContainer" containerID="4571a6afc3d8b925d078031d710ba1269247316448208d7ced6a6d2dc39c47db" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.780123 4858 scope.go:117] "RemoveContainer" containerID="73490784f49c910d90d4d93af6906515b9fa9ee64bb6188dd0b38529dbbe11e5" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.782948 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkb8l"] Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.817760 4858 scope.go:117] "RemoveContainer" containerID="6ece09d919203d89481f0b5d33cd4651f64c8955039c7cf875d4282eda6ac46b" Dec 05 14:57:52 crc kubenswrapper[4858]: I1205 14:57:52.832836 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kkb8l"] Dec 05 14:57:53 crc kubenswrapper[4858]: I1205 14:57:53.932961 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" path="/var/lib/kubelet/pods/25de7759-00a5-4912-bad3-fe1d44d10a0c/volumes" Dec 05 14:58:12 crc kubenswrapper[4858]: E1205 14:58:12.169658 4858 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.174:38674->38.102.83.174:41641: read tcp 38.102.83.174:38674->38.102.83.174:41641: read: connection reset by peer Dec 05 14:58:12 crc kubenswrapper[4858]: E1205 14:58:12.170876 4858 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.174:38674->38.102.83.174:41641: write tcp 38.102.83.174:38674->38.102.83.174:41641: write: broken pipe Dec 05 14:58:25 crc kubenswrapper[4858]: I1205 14:58:25.859671 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-fdcff888c-psnlc" podUID="3ab446a1-c4b7-40c6-879b-f0f90f4b8559" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.684212 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7pjjs"] Dec 05 14:59:39 crc kubenswrapper[4858]: E1205 14:59:39.690385 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="extract-utilities" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.690571 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="extract-utilities" Dec 05 14:59:39 crc kubenswrapper[4858]: E1205 14:59:39.690934 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="extract-content" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.690948 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="extract-content" Dec 05 14:59:39 crc kubenswrapper[4858]: E1205 14:59:39.690962 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="registry-server" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.691196 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="registry-server" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.692136 4858 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="25de7759-00a5-4912-bad3-fe1d44d10a0c" containerName="registry-server" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.697365 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.772244 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-catalog-content\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.772471 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgkf\" (UniqueName: \"kubernetes.io/projected/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-kube-api-access-lkgkf\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.772519 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-utilities\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.874006 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-catalog-content\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.874337 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkgkf\" (UniqueName: \"kubernetes.io/projected/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-kube-api-access-lkgkf\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.874371 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-utilities\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.876546 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-utilities\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.876944 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-catalog-content\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.906244 4858 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkgkf\" (UniqueName: \"kubernetes.io/projected/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-kube-api-access-lkgkf\") pod \"certified-operators-7pjjs\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:39 crc kubenswrapper[4858]: I1205 14:59:39.913657 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pjjs"] Dec 05 14:59:40 crc kubenswrapper[4858]: I1205 14:59:40.019763 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:40 crc kubenswrapper[4858]: I1205 14:59:40.961065 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7pjjs"] Dec 05 14:59:41 crc kubenswrapper[4858]: I1205 14:59:41.714273 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjjs" event={"ID":"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a","Type":"ContainerDied","Data":"45292c718083118d2e807bf69d7cb1d16f3d378624614a4b97cef28fd49e28e9"} Dec 05 14:59:41 crc kubenswrapper[4858]: I1205 14:59:41.714365 4858 generic.go:334] "Generic (PLEG): container finished" podID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerID="45292c718083118d2e807bf69d7cb1d16f3d378624614a4b97cef28fd49e28e9" exitCode=0 Dec 05 14:59:41 crc kubenswrapper[4858]: I1205 14:59:41.714606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjjs" event={"ID":"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a","Type":"ContainerStarted","Data":"ed259a30a80cbde0ed274c727246e545f3c67f1dc70f12a5c818a31c446015a0"} Dec 05 14:59:42 crc kubenswrapper[4858]: I1205 14:59:42.755756 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjjs" event={"ID":"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a","Type":"ContainerStarted","Data":"05a46aa86dfcfc9fedf98467898174b4304b496df789ee8b21e6253249989698"} Dec 05 14:59:44 crc kubenswrapper[4858]: I1205 14:59:44.760614 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 14:59:44 crc kubenswrapper[4858]: I1205 14:59:44.761653 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 14:59:44 crc kubenswrapper[4858]: I1205 14:59:44.793465 4858 generic.go:334] "Generic (PLEG): container finished" podID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerID="05a46aa86dfcfc9fedf98467898174b4304b496df789ee8b21e6253249989698" exitCode=0 Dec 05 14:59:44 crc kubenswrapper[4858]: I1205 14:59:44.793505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjjs" event={"ID":"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a","Type":"ContainerDied","Data":"05a46aa86dfcfc9fedf98467898174b4304b496df789ee8b21e6253249989698"} Dec 05 14:59:45 crc kubenswrapper[4858]: I1205 14:59:45.802253 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7pjjs" event={"ID":"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a","Type":"ContainerStarted","Data":"d9251d32eae61fa1ca88fade77aff46b959e0e7e49c05e2791b84fc8d7e0e45d"} Dec 05 14:59:50 crc kubenswrapper[4858]: I1205 14:59:50.020627 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:50 crc kubenswrapper[4858]: I1205 14:59:50.021243 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 14:59:51 crc kubenswrapper[4858]: I1205 14:59:51.159872 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7pjjs" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="registry-server" probeResult="failure" output=< Dec 05 14:59:51 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 14:59:51 crc kubenswrapper[4858]: > Dec 05 15:00:00 crc kubenswrapper[4858]: I1205 15:00:00.366516 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 15:00:00 crc kubenswrapper[4858]: I1205 15:00:00.568651 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 15:00:01 crc kubenswrapper[4858]: I1205 15:00:01.174903 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7pjjs" podStartSLOduration=18.666575725 podStartE2EDuration="22.173688187s" podCreationTimestamp="2025-12-05 14:59:39 +0000 UTC" firstStartedPulling="2025-12-05 14:59:41.716364369 +0000 UTC m=+3790.263962508" lastFinishedPulling="2025-12-05 14:59:45.223476831 +0000 UTC m=+3793.771074970" observedRunningTime="2025-12-05 14:59:45.826815551 +0000 UTC m=+3794.374413690" watchObservedRunningTime="2025-12-05 15:00:01.173688187 +0000 UTC m=+3809.721286326" Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.159359 4858 patch_prober.go:28] interesting pod/oauth-openshift-748578cd96-nlm54 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.160589 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.159353 4858 patch_prober.go:28] interesting pod/oauth-openshift-748578cd96-nlm54 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.162017 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.286058 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.286074 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.286141 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.286142 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.989013 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4bmzv" podUID="8c029ca1-2a2b-4983-855f-a9e6d7a7d306" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:02 crc kubenswrapper[4858]: I1205 15:00:02.989013 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4bmzv" podUID="8c029ca1-2a2b-4983-855f-a9e6d7a7d306" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.179420 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr"] Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.183994 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.239586 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-config-volume\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.239691 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfm2\" (UniqueName: \"kubernetes.io/projected/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-kube-api-access-zrfm2\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.239781 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-secret-volume\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.254211 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.254216 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.342144 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-config-volume\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.342737 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfm2\" (UniqueName: \"kubernetes.io/projected/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-kube-api-access-zrfm2\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.342938 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-secret-volume\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.346099 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-config-volume\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc 
kubenswrapper[4858]: I1205 15:00:03.361972 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-secret-volume\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.369883 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfm2\" (UniqueName: \"kubernetes.io/projected/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-kube-api-access-zrfm2\") pod \"collect-profiles-29415780-snbhr\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.516478 4858 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-fgpw2 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.517055 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" podUID="6e6696fd-dfa5-4863-ae4f-bac4c2379404" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.520000 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.577276 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7pjjs"] Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.579775 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7pjjs" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="registry-server" containerID="cri-o://d9251d32eae61fa1ca88fade77aff46b959e0e7e49c05e2791b84fc8d7e0e45d" gracePeriod=2 Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.725138 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:03 crc kubenswrapper[4858]: I1205 15:00:03.725263 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:04 crc kubenswrapper[4858]: I1205 15:00:04.372997 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr"] Dec 05 15:00:04 crc kubenswrapper[4858]: I1205 15:00:04.900407 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 15:00:04 crc kubenswrapper[4858]: I1205 15:00:04.904897 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:04 crc kubenswrapper[4858]: I1205 15:00:04.904900 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:05 crc kubenswrapper[4858]: I1205 15:00:05.001055 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjjs" event={"ID":"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a","Type":"ContainerDied","Data":"d9251d32eae61fa1ca88fade77aff46b959e0e7e49c05e2791b84fc8d7e0e45d"} Dec 05 15:00:05 crc kubenswrapper[4858]: I1205 15:00:05.000979 4858 generic.go:334] "Generic (PLEG): container finished" podID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerID="d9251d32eae61fa1ca88fade77aff46b959e0e7e49c05e2791b84fc8d7e0e45d" exitCode=0 Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.172053 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.246847 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-utilities\") pod \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.246961 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-catalog-content\") pod \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.247074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkgkf\" (UniqueName: \"kubernetes.io/projected/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-kube-api-access-lkgkf\") pod \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\" (UID: \"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a\") " Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.248538 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-utilities" (OuterVolumeSpecName: "utilities") pod "3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" (UID: "3091ef09-cd3a-47f9-bd2e-564f73bb4a4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.275384 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-kube-api-access-lkgkf" (OuterVolumeSpecName: "kube-api-access-lkgkf") pod "3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" (UID: "3091ef09-cd3a-47f9-bd2e-564f73bb4a4a"). InnerVolumeSpecName "kube-api-access-lkgkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.349553 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.349893 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkgkf\" (UniqueName: \"kubernetes.io/projected/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-kube-api-access-lkgkf\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.350952 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" (UID: "3091ef09-cd3a-47f9-bd2e-564f73bb4a4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.452190 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:06 crc kubenswrapper[4858]: I1205 15:00:06.576251 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr"] Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.050882 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" event={"ID":"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f","Type":"ContainerStarted","Data":"2747d4b8f335fe2bb964f08e33e1c187675b7052bb80a92837e6e0adbf195c1a"} Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.051817 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" event={"ID":"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f","Type":"ContainerStarted","Data":"2aabf82e982ae951323380f355b78432a92d6792dcbed5a382e8f347d108326e"} Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.055140 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7pjjs" event={"ID":"3091ef09-cd3a-47f9-bd2e-564f73bb4a4a","Type":"ContainerDied","Data":"ed259a30a80cbde0ed274c727246e545f3c67f1dc70f12a5c818a31c446015a0"} Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.055194 4858 scope.go:117] "RemoveContainer" containerID="d9251d32eae61fa1ca88fade77aff46b959e0e7e49c05e2791b84fc8d7e0e45d" Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.055435 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7pjjs" Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.078933 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" podStartSLOduration=6.078603416 podStartE2EDuration="6.078603416s" podCreationTimestamp="2025-12-05 15:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 15:00:07.070423495 +0000 UTC m=+3815.618021644" watchObservedRunningTime="2025-12-05 15:00:07.078603416 +0000 UTC m=+3815.626201555" Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.090031 4858 scope.go:117] "RemoveContainer" containerID="05a46aa86dfcfc9fedf98467898174b4304b496df789ee8b21e6253249989698" Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.124375 4858 scope.go:117] "RemoveContainer" containerID="45292c718083118d2e807bf69d7cb1d16f3d378624614a4b97cef28fd49e28e9" Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.164457 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7pjjs"] Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.181455 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7pjjs"] Dec 05 15:00:07 crc kubenswrapper[4858]: I1205 15:00:07.912012 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" path="/var/lib/kubelet/pods/3091ef09-cd3a-47f9-bd2e-564f73bb4a4a/volumes" Dec 05 15:00:09 crc kubenswrapper[4858]: I1205 15:00:09.377578 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" podUID="992029c2-7acc-4f87-b054-4a062babc670" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:09 crc kubenswrapper[4858]: I1205 15:00:09.401788 4858 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:09 crc kubenswrapper[4858]: I1205 15:00:09.401901 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:09 crc kubenswrapper[4858]: I1205 15:00:09.853520 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 15:00:10 crc kubenswrapper[4858]: I1205 15:00:10.692040 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.160254 4858 patch_prober.go:28] interesting pod/console-85b6884698-jg67f container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.160328 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-85b6884698-jg67f" podUID="edd4d801-d89a-48f7-a598-9011f83ceefd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.285985 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.286038 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.286437 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.286456 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.363976 4858 patch_prober.go:28] interesting pod/nmstate-webhook-5f6d4c5ccb-mz5j7 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.27:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.364036 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" podUID="4b3d39ce-7f49-470b-af52-6895f872f60d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.27:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.366273 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.707013 
4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" podUID="ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:11 crc kubenswrapper[4858]: I1205 15:00:11.707013 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" podUID="ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:12 crc kubenswrapper[4858]: I1205 15:00:12.152168 4858 patch_prober.go:28] interesting pod/oauth-openshift-748578cd96-nlm54 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:12 crc kubenswrapper[4858]: I1205 15:00:12.152498 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:12 crc kubenswrapper[4858]: I1205 15:00:12.152339 4858 patch_prober.go:28] interesting pod/oauth-openshift-748578cd96-nlm54 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:12 crc kubenswrapper[4858]: I1205 15:00:12.152673 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:12 crc kubenswrapper[4858]: I1205 15:00:12.988057 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4bmzv" podUID="8c029ca1-2a2b-4983-855f-a9e6d7a7d306" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:12 crc kubenswrapper[4858]: I1205 15:00:12.988760 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4bmzv" podUID="8c029ca1-2a2b-4983-855f-a9e6d7a7d306" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.371570 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-4n4r2" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:13 crc kubenswrapper[4858]: > Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 
15:00:13.371765 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-mhrc4" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:13 crc kubenswrapper[4858]: > Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.517370 4858 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-fgpw2 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.517719 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" podUID="6e6696fd-dfa5-4863-ae4f-bac4c2379404" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.671980 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-qbj7t" podUID="b87af213-3539-45a1-bbe5-c4fd1161ff1b" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:13 crc kubenswrapper[4858]: > Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.672013 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-qbj7t" podUID="b87af213-3539-45a1-bbe5-c4fd1161ff1b" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:13 crc kubenswrapper[4858]: > Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.725157 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.725733 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.801399 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.958232 4858 patch_prober.go:28] interesting pod/route-controller-manager-759f757447-m6wpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting 
for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.958285 4858 patch_prober.go:28] interesting pod/route-controller-manager-759f757447-m6wpn container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.958311 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" podUID="2e76c9b7-a280-482b-bd9f-507fd2950dc6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.958370 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" podUID="2e76c9b7-a280-482b-bd9f-507fd2950dc6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.961293 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-mhrc4" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:13 crc kubenswrapper[4858]: > Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.966424 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-4n4r2" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:13 crc kubenswrapper[4858]: > Dec 05 15:00:13 crc kubenswrapper[4858]: I1205 15:00:13.971492 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.799370 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.800546 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.830035 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 
Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.830164 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" podUID="4c9d3c6a-fda7-468e-9099-5f09c2dbdbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.830193 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.830206 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.830506 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.830557 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:00:14 crc kubenswrapper[4858]: I1205 15:00:14.830515 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" podUID="4c9d3c6a-fda7-468e-9099-5f09c2dbdbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.019665 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.020132 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.019686 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.020185 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.042261 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l2x7g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.042316 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" podUID="42ae75c8-e3d2-4328-83ef-4d7279d05abd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.042497 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l2x7g container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.042518 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" podUID="42ae75c8-e3d2-4328-83ef-4d7279d05abd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.191083 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" podUID="19f67bc9-5b77-4904-9aaf-8dbd7877d30d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.191166 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" podUID="19f67bc9-5b77-4904-9aaf-8dbd7877d30d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.191193 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.221637 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-xxk7s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.221677 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-xxk7s container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.221696 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" podUID="61356f17-0b7f-4482-83f2-5a6d542a4e68" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.221716 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" podUID="61356f17-0b7f-4482-83f2-5a6d542a4e68" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.795590 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="709c2e19-3180-41ef-9341-df5e95e1733a" containerName="galera" probeResult="failure" output="command timed out"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.795648 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="d1147ad4-1af3-4e6e-8b0d-a26db8d0af74" containerName="ovn-northd" probeResult="failure" output="command timed out"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.795741 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="d1147ad4-1af3-4e6e-8b0d-a26db8d0af74" containerName="ovn-northd" probeResult="failure" output="command timed out"
Dec 05 15:00:15 crc kubenswrapper[4858]: I1205 15:00:15.795784 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="709c2e19-3180-41ef-9341-df5e95e1733a" containerName="galera" probeResult="failure" output="command timed out"
Dec 05 15:00:17 crc kubenswrapper[4858]: I1205 15:00:17.283505 4858 trace.go:236] Trace[1968944825]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-4nzbm" (05-Dec-2025 15:00:07.764) (total time: 9517ms):
Dec 05 15:00:17 crc kubenswrapper[4858]: Trace[1968944825]: [9.517464194s] [9.517464194s] END
Dec 05 15:00:17 crc kubenswrapper[4858]: I1205 15:00:17.286125 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:17 crc kubenswrapper[4858]: I1205 15:00:17.286352 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:17 crc kubenswrapper[4858]: I1205 15:00:17.286125 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:00:17 crc kubenswrapper[4858]: I1205 15:00:17.286596 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:18 crc kubenswrapper[4858]: I1205 15:00:18.416114 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx" podUID="daaa12d2-f682-4ef8-b225-ca15ff2076ba" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:18 crc kubenswrapper[4858]: I1205 15:00:18.417035 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx" podUID="daaa12d2-f682-4ef8-b225-ca15ff2076ba" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:18 crc kubenswrapper[4858]: I1205 15:00:18.763005 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" podUID="f46597a6-55e2-49fa-8ee8-6fe7db5be4cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.65:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:18 crc kubenswrapper[4858]: I1205 15:00:18.763015 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-92n7j" podUID="f46597a6-55e2-49fa-8ee8-6fe7db5be4cb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.65:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:00:18 crc kubenswrapper[4858]: I1205 15:00:18.799940 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Dec 05 15:00:18 crc kubenswrapper[4858]: I1205 15:00:18.799944 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out"
podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Dec 05 15:00:18 crc kubenswrapper[4858]: I1205 15:00:18.855220 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" podUID="c71e1565-e737-42ce-b309-29b487e26853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.68:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:18 crc kubenswrapper[4858]: I1205 15:00:18.855221 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-6rlkv" podUID="c71e1565-e737-42ce-b309-29b487e26853" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.68:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.358804 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" podUID="1b6160ac-d6c8-448d-b849-4b0455cec2c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.442150 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4fptm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.442192 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-lst9j" podUID="1b6160ac-d6c8-448d-b849-4b0455cec2c1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.442219 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" podUID="ff2db84d-03a9-4c8e-9584-aeafa84ead17" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.442485 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" podUID="f482f790-9250-42a9-b5a5-e0509b1b0e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.609174 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-jscs5" podUID="f482f790-9250-42a9-b5a5-e0509b1b0e10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.609170 4858 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" podUID="f33ab949-382d-454e-9c4a-6e636a1f4bdc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.650185 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4lcwv" podUID="66f3a723-6f38-4b27-9363-bbe77135d954" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.650491 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-f8648f98b-wf646" podUID="ad1cb414-76a1-4dba-a006-9fec16fbf90d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.650579 4858 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.650584 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-tbh8l" podUID="7f9fa0fa-c2f8-4624-849e-088b48b9e71d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.650607 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.654940 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-f8648f98b-wf646" podUID="ad1cb414-76a1-4dba-a006-9fec16fbf90d" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.654992 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8tvrh" podUID="29cf74b8-eb6d-4655-876e-10e917166426" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.655016 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" podUID="34b5ac68-a347-4e14-b678-371378c55b7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 
15:00:19.738028 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" podUID="e033dea2-183c-4853-b77e-e77857882a4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.738120 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" podUID="34b5ac68-a347-4e14-b678-371378c55b7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.801961 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4fptm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.802012 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" podUID="ff2db84d-03a9-4c8e-9584-aeafa84ead17" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.802071 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" podUID="59405248-ef7c-4944-a9a4-724e24cf22af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.802104 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-tfs6p" podUID="f33ab949-382d-454e-9c4a-6e636a1f4bdc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.802727 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.803372 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-78f8948974-xpqrm" podUID="e033dea2-183c-4853-b77e-e77857882a4d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.803489 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.805472 4858 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"714fab6a4b4ed795f4c07ad114c7088986813b0085cdbf2109f32a7e1c39a10a"} pod="hostpath-provisioner/csi-hostpathplugin-l27jv" containerMessage="Container hostpath-provisioner failed liveness probe, will be restarted" Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.805922 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" containerID="cri-o://714fab6a4b4ed795f4c07ad114c7088986813b0085cdbf2109f32a7e1c39a10a" gracePeriod=30 Dec 05 15:00:19 crc kubenswrapper[4858]: I1205 15:00:19.885246 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" podUID="aa187928-b3b8-40e6-b60b-19d84781e34c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:19.885531 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-c8s9k" podUID="59405248-ef7c-4944-a9a4-724e24cf22af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:19.885803 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" podUID="aa187928-b3b8-40e6-b60b-19d84781e34c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.447761 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" podUID="5401bf83-09b5-464f-b52c-210a3fa92aa1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.447873 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.447899 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.448122 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" podUID="5401bf83-09b5-464f-b52c-210a3fa92aa1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 
05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.692088 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.795609 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="d1147ad4-1af3-4e6e-8b0d-a26db8d0af74" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.795692 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="d1147ad4-1af3-4e6e-8b0d-a26db8d0af74" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.817005 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" podUID="a181bba4-2682-4d6a-90cc-12bea5e07d34" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.857969 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" podUID="a181bba4-2682-4d6a-90cc-12bea5e07d34" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.857969 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:20 crc kubenswrapper[4858]: I1205 15:00:20.858079 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:21 crc kubenswrapper[4858]: I1205 15:00:21.117478 4858 patch_prober.go:28] interesting pod/console-85b6884698-jg67f container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:21 crc kubenswrapper[4858]: I1205 15:00:21.117558 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-85b6884698-jg67f" podUID="edd4d801-d89a-48f7-a598-9011f83ceefd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:21 crc kubenswrapper[4858]: I1205 15:00:21.160250 4858 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:21 crc kubenswrapper[4858]: I1205 15:00:21.160318 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:21 crc kubenswrapper[4858]: I1205 15:00:21.343665 4858 patch_prober.go:28] interesting pod/nmstate-webhook-5f6d4c5ccb-mz5j7 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.27:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:21 crc kubenswrapper[4858]: I1205 15:00:21.344017 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" podUID="4b3d39ce-7f49-470b-af52-6895f872f60d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.27:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:21 crc kubenswrapper[4858]: I1205 15:00:21.664970 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7688b5f8b9-9sgf5" podUID="ad4a9f4e-080d-43f5-8e3e-6bb24ac1a456" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:21 crc kubenswrapper[4858]: I1205 15:00:21.796561 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-f2tv5" podUID="1bd07ab3-c5f4-4922-b5b3-5f7a0549fec1" containerName="nmstate-handler" probeResult="failure" output="command timed out" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.153580 4858 patch_prober.go:28] interesting pod/oauth-openshift-748578cd96-nlm54 container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.153641 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.153686 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.153590 4858 patch_prober.go:28] interesting pod/oauth-openshift-748578cd96-nlm54 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.153790 4858 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.154042 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.157598 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"628c28a71c96308f3626201d8d7aee781a0c8fa9fa268e3c311e5b9ebf668ae9"} pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.554077 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="88978087-6caa-487b-8425-40fc1b70ced8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.554078 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="88978087-6caa-487b-8425-40fc1b70ced8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.554991 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="34c521aa-4339-4571-9168-f2939e083ea5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.203:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.555015 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="34c521aa-4339-4571-9168-f2939e083ea5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.203:8080/livez\": context deadline exceeded" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.992013 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4bmzv" podUID="8c029ca1-2a2b-4983-855f-a9e6d7a7d306" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.992061 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4bmzv" podUID="8c029ca1-2a2b-4983-855f-a9e6d7a7d306" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.992150 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4bmzv" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.992197 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-4bmzv" Dec 05 15:00:22 crc 
kubenswrapper[4858]: I1205 15:00:22.993300 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"f30eb0411f96373a31aad38c85cb8a89bb020a15fd91cac1d08aba91e4a9159f"} pod="metallb-system/speaker-4bmzv" containerMessage="Container speaker failed liveness probe, will be restarted" Dec 05 15:00:22 crc kubenswrapper[4858]: I1205 15:00:22.993386 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-4bmzv" podUID="8c029ca1-2a2b-4983-855f-a9e6d7a7d306" containerName="speaker" containerID="cri-o://f30eb0411f96373a31aad38c85cb8a89bb020a15fd91cac1d08aba91e4a9159f" gracePeriod=2 Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.288039 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.288106 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.288529 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.288581 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.288624 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.289537 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"5d5e56deb692818aca7f22a2b4d45f29105c2352931b33f27871b2cccbbb1f24"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.289581 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" containerID="cri-o://5d5e56deb692818aca7f22a2b4d45f29105c2352931b33f27871b2cccbbb1f24" gracePeriod=30 Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.564417 4858 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-fgpw2 
container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.564498 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" podUID="6e6696fd-dfa5-4863-ae4f-bac4c2379404" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.564555 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.565594 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"58d571ce2360f09c4c97f506e1ae78a990c75e358757459f4c39ce12d6d16573"} pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.565653 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" podUID="6e6696fd-dfa5-4863-ae4f-bac4c2379404" containerName="authentication-operator" containerID="cri-o://58d571ce2360f09c4c97f506e1ae78a990c75e358757459f4c39ce12d6d16573" gracePeriod=30 Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.566060 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4bmzv" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.621048 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="048ced77-bd4f-48c2-90f3-13081773f309" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.683137 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.686247 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.686640 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.687739 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cert-manager-webhook" 
containerStatusID={"Type":"cri-o","ID":"52c402d753cb402fcc292ca85ca222a17c7346314c40be0536250023433b613a"} pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" containerMessage="Container cert-manager-webhook failed liveness probe, will be restarted" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.690168 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" containerID="cri-o://52c402d753cb402fcc292ca85ca222a17c7346314c40be0536250023433b613a" gracePeriod=30 Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.690357 4858 patch_prober.go:28] interesting pod/controller-manager-74b47c9b9-pdvnc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.690465 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" podUID="34b7fa59-6622-4740-aa51-89d994381fe4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.692115 4858 patch_prober.go:28] interesting pod/controller-manager-74b47c9b9-pdvnc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.692182 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-74b47c9b9-pdvnc" podUID="34b7fa59-6622-4740-aa51-89d994381fe4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.958022 4858 patch_prober.go:28] interesting pod/route-controller-manager-759f757447-m6wpn container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:23 crc kubenswrapper[4858]: I1205 15:00:23.958179 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" podUID="2e76c9b7-a280-482b-bd9f-507fd2950dc6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.048019 4858 patch_prober.go:28] interesting pod/route-controller-manager-759f757447-m6wpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.048071 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66fb787db8-jqwt8" podUID="f9929d39-1191-4732-a51f-16d2f973bf90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.048074 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-759f757447-m6wpn" podUID="2e76c9b7-a280-482b-bd9f-507fd2950dc6" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.048012 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-66fb787db8-jqwt8" podUID="f9929d39-1191-4732-a51f-16d2f973bf90" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.787119 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.787407 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.787180 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.787557 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.787117 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" podUID="4c9d3c6a-fda7-468e-9099-5f09c2dbdbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.795885 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:24 crc 
kubenswrapper[4858]: I1205 15:00:24.796029 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.796672 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.796740 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.797269 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"4486ae1d027ec02849dcbaaef9604147087bf2cf131fa4e861bc6c695cddbdb1"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.807086 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.807146 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.808064 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"6bee8fb279de218cea32c6d04cd6b0cb46d74c41e4453011ed87d6b58ee12166"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Dec 05 15:00:24 crc kubenswrapper[4858]: I1205 15:00:24.808175 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" containerID="cri-o://6bee8fb279de218cea32c6d04cd6b0cb46d74c41e4453011ed87d6b58ee12166" gracePeriod=30 Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055488 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055545 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055659 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-6klpw container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055691 4858 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" podUID="e6d32935-4d3d-43c9-b7c7-8735545d39ba" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055695 4858 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-fhlhr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055732 4858 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-6klpw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055751 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6klpw" podUID="e6d32935-4d3d-43c9-b7c7-8735545d39ba" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055754 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fhlhr" podUID="2950ccec-35ea-4679-8cf6-1a67f52264b4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055780 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l2x7g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055799 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" podUID="42ae75c8-e3d2-4328-83ef-4d7279d05abd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055817 4858 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-l2x7g container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.055870 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-l2x7g" podUID="42ae75c8-e3d2-4328-83ef-4d7279d05abd" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.139053 4858 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-hsprq container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.139108 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" podUID="50cce18d-88c6-44b7-9a7d-9a9734a2eba2" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.139198 4858 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-hsprq container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.139223 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hsprq" podUID="50cce18d-88c6-44b7-9a7d-9a9734a2eba2" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.180036 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j9kk8" podUID="19f67bc9-5b77-4904-9aaf-8dbd7877d30d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.204297 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.204352 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.297178 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-xxk7s container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.297239 4858 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console-operator/console-operator-58897d9998-xxk7s" podUID="61356f17-0b7f-4482-83f2-5a6d542a4e68" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.297187 4858 patch_prober.go:28] interesting pod/console-operator-58897d9998-xxk7s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.297291 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xxk7s" podUID="61356f17-0b7f-4482-83f2-5a6d542a4e68" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.460468 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4bmzv" event={"ID":"8c029ca1-2a2b-4983-855f-a9e6d7a7d306","Type":"ContainerDied","Data":"f30eb0411f96373a31aad38c85cb8a89bb020a15fd91cac1d08aba91e4a9159f"} Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.460563 4858 generic.go:334] "Generic (PLEG): container finished" podID="8c029ca1-2a2b-4983-855f-a9e6d7a7d306" containerID="f30eb0411f96373a31aad38c85cb8a89bb020a15fd91cac1d08aba91e4a9159f" exitCode=0 Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.470091 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-qbj7t" podUID="b87af213-3539-45a1-bbe5-c4fd1161ff1b" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:25 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:25 crc kubenswrapper[4858]: > Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.471285 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-qbj7t" podUID="b87af213-3539-45a1-bbe5-c4fd1161ff1b" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:25 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:25 crc kubenswrapper[4858]: > Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.795132 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="709c2e19-3180-41ef-9341-df5e95e1733a" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.795258 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:25 crc kubenswrapper[4858]: I1205 15:00:25.796386 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="709c2e19-3180-41ef-9341-df5e95e1733a" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.158923 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-mhrc4" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" 
containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:26 crc kubenswrapper[4858]: > Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.160140 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-9fbw6" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:26 crc kubenswrapper[4858]: > Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.160342 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-4n4r2" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:26 crc kubenswrapper[4858]: > Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.168791 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-4n4r2" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:26 crc kubenswrapper[4858]: > Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.173988 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-k2hzq" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:26 crc kubenswrapper[4858]: > Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.174207 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-mhrc4" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:26 crc kubenswrapper[4858]: > Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.174764 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-k2hzq" podUID="461fbf64-d6a9-4371-a580-1d832c1a8a29" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:26 crc kubenswrapper[4858]: > Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.175065 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-9fbw6" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="registry-server" probeResult="failure" output=< Dec 05 15:00:26 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:00:26 crc kubenswrapper[4858]: > Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.472141 4858 generic.go:334] "Generic (PLEG): container finished" podID="6e6696fd-dfa5-4863-ae4f-bac4c2379404" containerID="58d571ce2360f09c4c97f506e1ae78a990c75e358757459f4c39ce12d6d16573" exitCode=0 Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.472180 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" event={"ID":"6e6696fd-dfa5-4863-ae4f-bac4c2379404","Type":"ContainerDied","Data":"58d571ce2360f09c4c97f506e1ae78a990c75e358757459f4c39ce12d6d16573"} Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.474944 4858 generic.go:334] "Generic (PLEG): container finished" podID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerID="5d5e56deb692818aca7f22a2b4d45f29105c2352931b33f27871b2cccbbb1f24" exitCode=0 Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.475047 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" event={"ID":"db8cbc4d-eadf-4949-9b00-760f67bd0442","Type":"ContainerDied","Data":"5d5e56deb692818aca7f22a2b4d45f29105c2352931b33f27871b2cccbbb1f24"} Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.478144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4bmzv" event={"ID":"8c029ca1-2a2b-4983-855f-a9e6d7a7d306","Type":"ContainerStarted","Data":"9333ac6a51217099445ccc5db92a08b8a73a17a31df285ed39d0d32a91380e73"} Dec 05 15:00:26 crc kubenswrapper[4858]: I1205 15:00:26.478395 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4bmzv" Dec 05 15:00:27 crc kubenswrapper[4858]: I1205 15:00:27.517144 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fgpw2" event={"ID":"6e6696fd-dfa5-4863-ae4f-bac4c2379404","Type":"ContainerStarted","Data":"7e9832eeff2efaa8a593d411f5328256e3e212de9368341a350a66cf8f82e6be"} Dec 05 15:00:27 crc kubenswrapper[4858]: I1205 15:00:27.521912 4858 generic.go:334] "Generic (PLEG): container finished" podID="cb76164b-d338-4395-af71-e6dd098c165f" containerID="52c402d753cb402fcc292ca85ca222a17c7346314c40be0536250023433b613a" exitCode=0 Dec 05 15:00:27 crc kubenswrapper[4858]: I1205 15:00:27.522053 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" event={"ID":"cb76164b-d338-4395-af71-e6dd098c165f","Type":"ContainerDied","Data":"52c402d753cb402fcc292ca85ca222a17c7346314c40be0536250023433b613a"} Dec 05 15:00:27 crc kubenswrapper[4858]: I1205 15:00:27.525887 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" event={"ID":"db8cbc4d-eadf-4949-9b00-760f67bd0442","Type":"ContainerStarted","Data":"7579d69384d07254dc8b9816c20d4430ae47c07f5f6914edd9f59f787a527c7c"} Dec 05 15:00:27 crc kubenswrapper[4858]: I1205 15:00:27.525927 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 15:00:27 crc kubenswrapper[4858]: I1205 15:00:27.643844 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": dial tcp 10.217.0.73:6080: connect: connection refused" Dec 05 15:00:28 crc kubenswrapper[4858]: I1205 15:00:28.416974 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx" podUID="daaa12d2-f682-4ef8-b225-ca15ff2076ba" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Dec 05 15:00:28 crc kubenswrapper[4858]: I1205 15:00:28.417109 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-666bd46db5-6xjlx" podUID="daaa12d2-f682-4ef8-b225-ca15ff2076ba" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:28 crc kubenswrapper[4858]: I1205 15:00:28.535488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" event={"ID":"cb76164b-d338-4395-af71-e6dd098c165f","Type":"ContainerStarted","Data":"a96e3a454b8b9a06f350c0e4193bb25006e2d64729088dbd5ebb28f2e51a1fae"} Dec 05 15:00:28 crc kubenswrapper[4858]: I1205 15:00:28.536559 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": dial tcp 10.217.0.73:6080: connect: connection refused" Dec 05 15:00:28 crc kubenswrapper[4858]: I1205 15:00:28.537856 4858 status_manager.go:317] "Container readiness changed for unknown container" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" containerID="cri-o://52c402d753cb402fcc292ca85ca222a17c7346314c40be0536250023433b613a" Dec 05 15:00:28 crc kubenswrapper[4858]: I1205 15:00:28.537883 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 15:00:29 crc kubenswrapper[4858]: I1205 15:00:29.061487 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-rjkwx" podUID="34b5ac68-a347-4e14-b678-371378c55b7a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:29 crc kubenswrapper[4858]: I1205 15:00:29.061223 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4fptm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:29 crc kubenswrapper[4858]: I1205 15:00:29.061905 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" podUID="ff2db84d-03a9-4c8e-9584-aeafa84ead17" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:29 crc kubenswrapper[4858]: I1205 15:00:29.061493 4858 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4fptm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:29 crc kubenswrapper[4858]: I1205 15:00:29.061956 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4fptm" podUID="ff2db84d-03a9-4c8e-9584-aeafa84ead17" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:29 crc kubenswrapper[4858]: I1205 15:00:29.542989 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.552957 4858 generic.go:334] "Generic (PLEG): container finished" podID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerID="6bee8fb279de218cea32c6d04cd6b0cb46d74c41e4453011ed87d6b58ee12166" exitCode=0 Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.552996 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerDied","Data":"6bee8fb279de218cea32c6d04cd6b0cb46d74c41e4453011ed87d6b58ee12166"} Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.556387 4858 generic.go:334] "Generic (PLEG): container finished" podID="2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f" containerID="2747d4b8f335fe2bb964f08e33e1c187675b7052bb80a92837e6e0adbf195c1a" exitCode=0 Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.556422 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" event={"ID":"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f","Type":"ContainerDied","Data":"2747d4b8f335fe2bb964f08e33e1c187675b7052bb80a92837e6e0adbf195c1a"} Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.691267 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.691652 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-756vt" Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.693328 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"b69b2320211476ae65efb04f92c88b9a653258a7957b8d14810a2a6bfaa50938"} pod="metallb-system/frr-k8s-756vt" containerMessage="Container frr failed liveness probe, will be restarted" Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.693679 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="frr" containerID="cri-o://b69b2320211476ae65efb04f92c88b9a653258a7957b8d14810a2a6bfaa50938" gracePeriod=2 Dec 05 15:00:30 crc kubenswrapper[4858]: I1205 15:00:30.875296 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 15:00:31 crc kubenswrapper[4858]: I1205 15:00:31.204092 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 05 15:00:31 crc kubenswrapper[4858]: I1205 15:00:31.204640 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 
10.217.0.8:8443: connect: connection refused" Dec 05 15:00:31 crc kubenswrapper[4858]: I1205 15:00:31.205657 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 05 15:00:31 crc kubenswrapper[4858]: I1205 15:00:31.205799 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 05 15:00:31 crc kubenswrapper[4858]: I1205 15:00:31.589021 4858 generic.go:334] "Generic (PLEG): container finished" podID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerID="b69b2320211476ae65efb04f92c88b9a653258a7957b8d14810a2a6bfaa50938" exitCode=143 Dec 05 15:00:31 crc kubenswrapper[4858]: I1205 15:00:31.589111 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerDied","Data":"b69b2320211476ae65efb04f92c88b9a653258a7957b8d14810a2a6bfaa50938"} Dec 05 15:00:31 crc kubenswrapper[4858]: I1205 15:00:31.589412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-756vt" event={"ID":"9a3a124e-0ac1-4f2a-aee6-3cae0fd66576","Type":"ContainerStarted","Data":"939907f69bc200b3130276126275fb57a23cbce6e3d405f9d2ed10c2225ddcb8"} Dec 05 15:00:32 crc kubenswrapper[4858]: I1205 15:00:32.113035 4858 patch_prober.go:28] interesting pod/oauth-openshift-748578cd96-nlm54 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:00:32 crc kubenswrapper[4858]: I1205 15:00:32.113105 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 15:00:32 crc kubenswrapper[4858]: I1205 15:00:32.604401 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerStarted","Data":"053ccc0ce15b1f852702a00d8370428388c09c958fe134cbdc3179030692f365"} Dec 05 15:00:34 crc kubenswrapper[4858]: I1205 15:00:34.251916 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" Dec 05 15:00:34 crc kubenswrapper[4858]: I1205 15:00:34.650957 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-756vt" Dec 05 15:00:34 crc kubenswrapper[4858]: I1205 15:00:34.801997 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" probeResult="failure" output="command timed out" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.010429 4858 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/openstack-galera-0" podUID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerName="galera" containerID="cri-o://4486ae1d027ec02849dcbaaef9604147087bf2cf131fa4e861bc6c695cddbdb1" gracePeriod=20 Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.285034 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.296443 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-756vt" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.303074 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="eaf87b37-d86c-4788-9768-2b3abf22f309" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.453125 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-secret-volume\") pod \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.453183 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrfm2\" (UniqueName: \"kubernetes.io/projected/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-kube-api-access-zrfm2\") pod \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.453214 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-config-volume\") pod \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\" (UID: \"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f\") " Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.456702 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f" (UID: "2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.476008 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f" (UID: "2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.477773 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-kube-api-access-zrfm2" (OuterVolumeSpecName: "kube-api-access-zrfm2") pod "2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f" (UID: "2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f"). InnerVolumeSpecName "kube-api-access-zrfm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.556305 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.556345 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrfm2\" (UniqueName: \"kubernetes.io/projected/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-kube-api-access-zrfm2\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.556356 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.637719 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.638225 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr" event={"ID":"2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f","Type":"ContainerDied","Data":"2aabf82e982ae951323380f355b78432a92d6792dcbed5a382e8f347d108326e"} Dec 05 15:00:35 crc kubenswrapper[4858]: I1205 15:00:35.638510 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aabf82e982ae951323380f355b78432a92d6792dcbed5a382e8f347d108326e" Dec 05 15:00:36 crc kubenswrapper[4858]: I1205 15:00:36.668265 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr"] Dec 05 15:00:36 crc kubenswrapper[4858]: I1205 15:00:36.682409 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415735-jrxnr"] Dec 05 15:00:37 crc kubenswrapper[4858]: I1205 15:00:37.652474 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" Dec 05 15:00:37 crc kubenswrapper[4858]: I1205 15:00:37.663945 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e","Type":"ContainerDied","Data":"4486ae1d027ec02849dcbaaef9604147087bf2cf131fa4e861bc6c695cddbdb1"} Dec 05 15:00:37 crc kubenswrapper[4858]: I1205 15:00:37.664178 4858 generic.go:334] "Generic (PLEG): container finished" podID="535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e" containerID="4486ae1d027ec02849dcbaaef9604147087bf2cf131fa4e861bc6c695cddbdb1" exitCode=0 Dec 05 15:00:37 crc kubenswrapper[4858]: I1205 15:00:37.665101 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"535bf7fb-3e78-4bdb-8ed6-0f6d3b45d09e","Type":"ContainerStarted","Data":"c7cbc5bd6d08f0ee79e44cefdf4ade8c39caf3134f399474d141f1842e8434c3"} Dec 05 15:00:37 crc kubenswrapper[4858]: I1205 15:00:37.919282 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c072c3bf-87e4-4807-a14f-243c05c3e54d" path="/var/lib/kubelet/pods/c072c3bf-87e4-4807-a14f-243c05c3e54d/volumes" Dec 05 15:00:38 crc kubenswrapper[4858]: I1205 15:00:38.142587 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="eaf87b37-d86c-4788-9768-2b3abf22f309" 
containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 15:00:39 crc kubenswrapper[4858]: I1205 15:00:39.375960 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:39 crc kubenswrapper[4858]: I1205 15:00:39.377112 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" containerID="cri-o://053ccc0ce15b1f852702a00d8370428388c09c958fe134cbdc3179030692f365" gracePeriod=30 Dec 05 15:00:39 crc kubenswrapper[4858]: I1205 15:00:39.377214 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="proxy-httpd" containerID="cri-o://bbef305c73922336c39bc4a6af66b38c55611fca825f65d600f338e1b67a82d5" gracePeriod=30 Dec 05 15:00:39 crc kubenswrapper[4858]: I1205 15:00:39.377290 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="sg-core" containerID="cri-o://f751d4ff62041ede6966fcbc96230a1c1b6829556d737a8510353c8f90e3f866" gracePeriod=30 Dec 05 15:00:39 crc kubenswrapper[4858]: I1205 15:00:39.377107 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-notification-agent" containerID="cri-o://8eadbbd2abb1905af14eb90a333add42d0e3bd86326e6a86fbf70df4b23b02d3" gracePeriod=30 Dec 05 15:00:39 crc kubenswrapper[4858]: I1205 15:00:39.692906 4858 generic.go:334] "Generic (PLEG): container finished" podID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerID="f751d4ff62041ede6966fcbc96230a1c1b6829556d737a8510353c8f90e3f866" exitCode=2 Dec 05 15:00:39 crc kubenswrapper[4858]: I1205 15:00:39.693018 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerDied","Data":"f751d4ff62041ede6966fcbc96230a1c1b6829556d737a8510353c8f90e3f866"} Dec 05 15:00:40 crc kubenswrapper[4858]: I1205 15:00:40.704369 4858 generic.go:334] "Generic (PLEG): container finished" podID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerID="053ccc0ce15b1f852702a00d8370428388c09c958fe134cbdc3179030692f365" exitCode=0 Dec 05 15:00:40 crc kubenswrapper[4858]: I1205 15:00:40.705481 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerDied","Data":"053ccc0ce15b1f852702a00d8370428388c09c958fe134cbdc3179030692f365"} Dec 05 15:00:40 crc kubenswrapper[4858]: I1205 15:00:40.705903 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerDied","Data":"bbef305c73922336c39bc4a6af66b38c55611fca825f65d600f338e1b67a82d5"} Dec 05 15:00:40 crc kubenswrapper[4858]: I1205 15:00:40.705990 4858 scope.go:117] "RemoveContainer" containerID="6bee8fb279de218cea32c6d04cd6b0cb46d74c41e4453011ed87d6b58ee12166" Dec 05 15:00:40 crc kubenswrapper[4858]: I1205 15:00:40.705495 4858 generic.go:334] "Generic (PLEG): container finished" podID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerID="bbef305c73922336c39bc4a6af66b38c55611fca825f65d600f338e1b67a82d5" exitCode=0 Dec 05 15:00:41 crc kubenswrapper[4858]: I1205 15:00:41.082929 4858 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 15:00:41 crc kubenswrapper[4858]: I1205 15:00:41.170016 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="eaf87b37-d86c-4788-9768-2b3abf22f309" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 15:00:41 crc kubenswrapper[4858]: I1205 15:00:41.170148 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 15:00:41 crc kubenswrapper[4858]: I1205 15:00:41.171523 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"c70f033307a7acc48406cec5a46e0d47d1b962b438e8904299dd296e5ca8b9fd"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Dec 05 15:00:41 crc kubenswrapper[4858]: I1205 15:00:41.171600 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eaf87b37-d86c-4788-9768-2b3abf22f309" containerName="cinder-scheduler" containerID="cri-o://c70f033307a7acc48406cec5a46e0d47d1b962b438e8904299dd296e5ca8b9fd" gracePeriod=30 Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.718997 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerDied","Data":"8eadbbd2abb1905af14eb90a333add42d0e3bd86326e6a86fbf70df4b23b02d3"} Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.719007 4858 generic.go:334] "Generic (PLEG): container finished" podID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerID="8eadbbd2abb1905af14eb90a333add42d0e3bd86326e6a86fbf70df4b23b02d3" exitCode=0 Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.856275 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.948862 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4bmzv" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.986436 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-ceilometer-tls-certs\") pod \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.986489 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-sg-core-conf-yaml\") pod \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.986598 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-log-httpd\") pod \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.986718 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-config-data\") pod \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.986755 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-scripts\") pod \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.986783 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-run-httpd\") pod \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.986805 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-combined-ca-bundle\") pod \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:41.986866 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nlp7\" (UniqueName: \"kubernetes.io/projected/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-kube-api-access-7nlp7\") pod \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\" (UID: \"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba\") " Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.013738 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" (UID: "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.015225 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" (UID: "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.022412 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-kube-api-access-7nlp7" (OuterVolumeSpecName: "kube-api-access-7nlp7") pod "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" (UID: "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba"). InnerVolumeSpecName "kube-api-access-7nlp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.024608 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-scripts" (OuterVolumeSpecName: "scripts") pod "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" (UID: "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.063041 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" (UID: "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.088082 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" (UID: "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.089584 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.089597 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.089606 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.089616 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.089645 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.089654 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nlp7\" (UniqueName: \"kubernetes.io/projected/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-kube-api-access-7nlp7\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.109213 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" (UID: "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.131926 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-config-data" (OuterVolumeSpecName: "config-data") pod "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" (UID: "cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.192165 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.192212 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.730318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba","Type":"ContainerDied","Data":"ad16ddbdf490d16d5c7e577b39bbbbf9ed69f50c4ba02592faab7bfab7d89859"} Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.730532 4858 scope.go:117] "RemoveContainer" containerID="053ccc0ce15b1f852702a00d8370428388c09c958fe134cbdc3179030692f365" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.730392 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.760637 4858 scope.go:117] "RemoveContainer" containerID="bbef305c73922336c39bc4a6af66b38c55611fca825f65d600f338e1b67a82d5" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.781324 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.785838 4858 scope.go:117] "RemoveContainer" containerID="f751d4ff62041ede6966fcbc96230a1c1b6829556d737a8510353c8f90e3f866" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.796306 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.811030 4858 scope.go:117] "RemoveContainer" containerID="8eadbbd2abb1905af14eb90a333add42d0e3bd86326e6a86fbf70df4b23b02d3" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.824584 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825257 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="sg-core" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825431 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="sg-core" Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825449 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825456 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825471 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-notification-agent" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825478 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-notification-agent" Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825497 4858 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="extract-content" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825503 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="extract-content" Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825521 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825527 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825535 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="registry-server" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825541 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="registry-server" Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825553 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f" containerName="collect-profiles" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825559 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f" containerName="collect-profiles" Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825576 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="extract-utilities" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825582 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="extract-utilities" Dec 05 15:00:42 crc kubenswrapper[4858]: E1205 15:00:42.825593 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="proxy-httpd" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.825598 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="proxy-httpd" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.830973 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="proxy-httpd" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.831012 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3091ef09-cd3a-47f9-bd2e-564f73bb4a4a" containerName="registry-server" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.831038 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.831050 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="sg-core" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.831059 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-central-agent" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.831066 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" containerName="ceilometer-notification-agent" Dec 05 15:00:42 crc 
kubenswrapper[4858]: I1205 15:00:42.831079 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f" containerName="collect-profiles" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.833964 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.837189 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.838061 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.839688 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.899911 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.905121 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-log-httpd\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.905187 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp575\" (UniqueName: \"kubernetes.io/projected/af869d64-c165-44c6-8c2f-4c90997e7180-kube-api-access-xp575\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.905205 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.905222 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.905541 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-config-data\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.905633 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-run-httpd\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.905680 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-scripts\") pod \"ceilometer-0\" (UID: 
\"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:42 crc kubenswrapper[4858]: I1205 15:00:42.905775 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.007278 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-config-data\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.007339 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-run-httpd\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.007361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-scripts\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.007407 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.007467 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-log-httpd\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.007609 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp575\" (UniqueName: \"kubernetes.io/projected/af869d64-c165-44c6-8c2f-4c90997e7180-kube-api-access-xp575\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.007635 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.007661 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.009210 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-log-httpd\") pod \"ceilometer-0\" 
(UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.009663 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-run-httpd\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.014046 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.014766 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.016072 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-scripts\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.016611 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-config-data\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.018096 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.030056 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp575\" (UniqueName: \"kubernetes.io/projected/af869d64-c165-44c6-8c2f-4c90997e7180-kube-api-access-xp575\") pod \"ceilometer-0\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.076747 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.077853 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.149804 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.224397 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.870994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 05 15:00:43 crc kubenswrapper[4858]: I1205 15:00:43.918342 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba" path="/var/lib/kubelet/pods/cdb5a7f0-22c2-43a9-86f2-0c70c966c6ba/volumes" Dec 05 15:00:44 crc kubenswrapper[4858]: I1205 15:00:44.115968 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:44 crc kubenswrapper[4858]: W1205 15:00:44.181019 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf869d64_c165_44c6_8c2f_4c90997e7180.slice/crio-09436c050484588bf1dfc613b8017125bbec027b86dbb5c9246b0384ec84bf61 WatchSource:0}: Error finding container 09436c050484588bf1dfc613b8017125bbec027b86dbb5c9246b0384ec84bf61: Status 404 returned error can't find the container with id 09436c050484588bf1dfc613b8017125bbec027b86dbb5c9246b0384ec84bf61 Dec 05 15:00:44 crc kubenswrapper[4858]: I1205 15:00:44.761419 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:00:44 crc kubenswrapper[4858]: I1205 15:00:44.761668 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:00:44 crc kubenswrapper[4858]: I1205 15:00:44.761801 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 15:00:44 crc kubenswrapper[4858]: I1205 15:00:44.784095 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerStarted","Data":"c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9"} Dec 05 15:00:44 crc kubenswrapper[4858]: I1205 15:00:44.784139 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerStarted","Data":"09436c050484588bf1dfc613b8017125bbec027b86dbb5c9246b0384ec84bf61"} Dec 05 15:00:44 crc kubenswrapper[4858]: I1205 15:00:44.795891 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 15:00:44 crc kubenswrapper[4858]: I1205 15:00:44.800911 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" gracePeriod=600 Dec 05 15:00:44 crc kubenswrapper[4858]: E1205 15:00:44.927091 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:00:45 crc kubenswrapper[4858]: I1205 15:00:45.794795 4858 generic.go:334] "Generic (PLEG): container finished" podID="eaf87b37-d86c-4788-9768-2b3abf22f309" containerID="c70f033307a7acc48406cec5a46e0d47d1b962b438e8904299dd296e5ca8b9fd" exitCode=0 Dec 05 15:00:45 crc kubenswrapper[4858]: I1205 15:00:45.795063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf87b37-d86c-4788-9768-2b3abf22f309","Type":"ContainerDied","Data":"c70f033307a7acc48406cec5a46e0d47d1b962b438e8904299dd296e5ca8b9fd"} Dec 05 15:00:45 crc kubenswrapper[4858]: I1205 15:00:45.802371 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" exitCode=0 Dec 05 15:00:45 crc kubenswrapper[4858]: I1205 15:00:45.802425 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003"} Dec 05 15:00:45 crc kubenswrapper[4858]: I1205 15:00:45.802477 4858 scope.go:117] "RemoveContainer" containerID="94feaac31b8084a4c9c8b1f276d2f86b32f1ae29a3dc586cf0bbd4c277523660" Dec 05 15:00:45 crc kubenswrapper[4858]: I1205 15:00:45.803735 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:00:45 crc kubenswrapper[4858]: E1205 15:00:45.804367 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:00:45 crc kubenswrapper[4858]: I1205 15:00:45.806005 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerStarted","Data":"2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb"} Dec 05 15:00:45 crc kubenswrapper[4858]: I1205 15:00:45.806041 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerStarted","Data":"86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0"} Dec 05 15:00:47 crc kubenswrapper[4858]: I1205 15:00:47.463809 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" 
containerName="oauth-openshift" containerID="cri-o://628c28a71c96308f3626201d8d7aee781a0c8fa9fa268e3c311e5b9ebf668ae9" gracePeriod=15 Dec 05 15:00:47 crc kubenswrapper[4858]: I1205 15:00:47.478422 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:47 crc kubenswrapper[4858]: I1205 15:00:47.826801 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eaf87b37-d86c-4788-9768-2b3abf22f309","Type":"ContainerStarted","Data":"f77bf9aa3f6f1056efd445c0e75653ff9c364a19415dee33cb806b511292292b"} Dec 05 15:00:47 crc kubenswrapper[4858]: I1205 15:00:47.832272 4858 generic.go:334] "Generic (PLEG): container finished" podID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerID="628c28a71c96308f3626201d8d7aee781a0c8fa9fa268e3c311e5b9ebf668ae9" exitCode=0 Dec 05 15:00:47 crc kubenswrapper[4858]: I1205 15:00:47.832342 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" event={"ID":"e81e683d-b55e-4076-b333-4e68d8caed3c","Type":"ContainerDied","Data":"628c28a71c96308f3626201d8d7aee781a0c8fa9fa268e3c311e5b9ebf668ae9"} Dec 05 15:00:47 crc kubenswrapper[4858]: I1205 15:00:47.835435 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerStarted","Data":"dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849"} Dec 05 15:00:47 crc kubenswrapper[4858]: I1205 15:00:47.835532 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 15:00:47 crc kubenswrapper[4858]: I1205 15:00:47.876742 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.122175736 podStartE2EDuration="5.876483624s" podCreationTimestamp="2025-12-05 15:00:42 +0000 UTC" firstStartedPulling="2025-12-05 15:00:44.183269703 +0000 UTC m=+3852.730867842" lastFinishedPulling="2025-12-05 15:00:46.937577591 +0000 UTC m=+3855.485175730" observedRunningTime="2025-12-05 15:00:47.875366504 +0000 UTC m=+3856.422964643" watchObservedRunningTime="2025-12-05 15:00:47.876483624 +0000 UTC m=+3856.424081763" Dec 05 15:00:48 crc kubenswrapper[4858]: I1205 15:00:48.847485 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="ceilometer-central-agent" containerID="cri-o://c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9" gracePeriod=30 Dec 05 15:00:48 crc kubenswrapper[4858]: I1205 15:00:48.847907 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" event={"ID":"e81e683d-b55e-4076-b333-4e68d8caed3c","Type":"ContainerStarted","Data":"259c04f4b9194dbed65fba7481c634296da7ab58f729fbb030165d575a6e3ff1"} Dec 05 15:00:48 crc kubenswrapper[4858]: I1205 15:00:48.847937 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="sg-core" containerID="cri-o://2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb" gracePeriod=30 Dec 05 15:00:48 crc kubenswrapper[4858]: I1205 15:00:48.848088 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="proxy-httpd" 
containerID="cri-o://dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849" gracePeriod=30 Dec 05 15:00:48 crc kubenswrapper[4858]: I1205 15:00:48.848333 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="ceilometer-notification-agent" containerID="cri-o://86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0" gracePeriod=30 Dec 05 15:00:48 crc kubenswrapper[4858]: I1205 15:00:48.848875 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 15:00:48 crc kubenswrapper[4858]: I1205 15:00:48.987136 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" Dec 05 15:00:49 crc kubenswrapper[4858]: I1205 15:00:49.860315 4858 generic.go:334] "Generic (PLEG): container finished" podID="521a1948-1758-4148-be85-f3d91f04aac9" containerID="714fab6a4b4ed795f4c07ad114c7088986813b0085cdbf2109f32a7e1c39a10a" exitCode=137 Dec 05 15:00:49 crc kubenswrapper[4858]: I1205 15:00:49.860613 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerDied","Data":"714fab6a4b4ed795f4c07ad114c7088986813b0085cdbf2109f32a7e1c39a10a"} Dec 05 15:00:49 crc kubenswrapper[4858]: I1205 15:00:49.863226 4858 generic.go:334] "Generic (PLEG): container finished" podID="af869d64-c165-44c6-8c2f-4c90997e7180" containerID="dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849" exitCode=0 Dec 05 15:00:49 crc kubenswrapper[4858]: I1205 15:00:49.863253 4858 generic.go:334] "Generic (PLEG): container finished" podID="af869d64-c165-44c6-8c2f-4c90997e7180" containerID="2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb" exitCode=2 Dec 05 15:00:49 crc kubenswrapper[4858]: I1205 15:00:49.863264 4858 generic.go:334] "Generic (PLEG): container finished" podID="af869d64-c165-44c6-8c2f-4c90997e7180" containerID="86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0" exitCode=0 Dec 05 15:00:49 crc kubenswrapper[4858]: I1205 15:00:49.864399 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerDied","Data":"dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849"} Dec 05 15:00:49 crc kubenswrapper[4858]: I1205 15:00:49.864433 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerDied","Data":"2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb"} Dec 05 15:00:49 crc kubenswrapper[4858]: I1205 15:00:49.864447 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerDied","Data":"86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0"} Dec 05 15:00:50 crc kubenswrapper[4858]: I1205 15:00:50.124975 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 15:00:50 crc kubenswrapper[4858]: I1205 15:00:50.875026 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" 
event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerStarted","Data":"d5fe2c29ee8b9072f88e7c5a1e74b4bd5e9b3e5555d44c3ca60e280592e82d66"} Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.809252 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.850902 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-scripts\") pod \"af869d64-c165-44c6-8c2f-4c90997e7180\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.850955 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-run-httpd\") pod \"af869d64-c165-44c6-8c2f-4c90997e7180\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.851042 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-config-data\") pod \"af869d64-c165-44c6-8c2f-4c90997e7180\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.851118 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-combined-ca-bundle\") pod \"af869d64-c165-44c6-8c2f-4c90997e7180\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.851220 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-sg-core-conf-yaml\") pod \"af869d64-c165-44c6-8c2f-4c90997e7180\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.851261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-ceilometer-tls-certs\") pod \"af869d64-c165-44c6-8c2f-4c90997e7180\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.851381 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp575\" (UniqueName: \"kubernetes.io/projected/af869d64-c165-44c6-8c2f-4c90997e7180-kube-api-access-xp575\") pod \"af869d64-c165-44c6-8c2f-4c90997e7180\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.851427 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-log-httpd\") pod \"af869d64-c165-44c6-8c2f-4c90997e7180\" (UID: \"af869d64-c165-44c6-8c2f-4c90997e7180\") " Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.854203 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "af869d64-c165-44c6-8c2f-4c90997e7180" (UID: "af869d64-c165-44c6-8c2f-4c90997e7180"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.855221 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "af869d64-c165-44c6-8c2f-4c90997e7180" (UID: "af869d64-c165-44c6-8c2f-4c90997e7180"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.868708 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-scripts" (OuterVolumeSpecName: "scripts") pod "af869d64-c165-44c6-8c2f-4c90997e7180" (UID: "af869d64-c165-44c6-8c2f-4c90997e7180"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.870133 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af869d64-c165-44c6-8c2f-4c90997e7180-kube-api-access-xp575" (OuterVolumeSpecName: "kube-api-access-xp575") pod "af869d64-c165-44c6-8c2f-4c90997e7180" (UID: "af869d64-c165-44c6-8c2f-4c90997e7180"). InnerVolumeSpecName "kube-api-access-xp575". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.924415 4858 generic.go:334] "Generic (PLEG): container finished" podID="af869d64-c165-44c6-8c2f-4c90997e7180" containerID="c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9" exitCode=0 Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.924474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerDied","Data":"c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9"} Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.924706 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.924902 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"af869d64-c165-44c6-8c2f-4c90997e7180","Type":"ContainerDied","Data":"09436c050484588bf1dfc613b8017125bbec027b86dbb5c9246b0384ec84bf61"} Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.924935 4858 scope.go:117] "RemoveContainer" containerID="dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.938014 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "af869d64-c165-44c6-8c2f-4c90997e7180" (UID: "af869d64-c165-44c6-8c2f-4c90997e7180"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.954421 4858 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.954448 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp575\" (UniqueName: \"kubernetes.io/projected/af869d64-c165-44c6-8c2f-4c90997e7180-kube-api-access-xp575\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.954463 4858 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.954472 4858 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.954483 4858 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/af869d64-c165-44c6-8c2f-4c90997e7180-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.963591 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "af869d64-c165-44c6-8c2f-4c90997e7180" (UID: "af869d64-c165-44c6-8c2f-4c90997e7180"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:52 crc kubenswrapper[4858]: I1205 15:00:52.981404 4858 scope.go:117] "RemoveContainer" containerID="2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.008440 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-config-data" (OuterVolumeSpecName: "config-data") pod "af869d64-c165-44c6-8c2f-4c90997e7180" (UID: "af869d64-c165-44c6-8c2f-4c90997e7180"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.011938 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af869d64-c165-44c6-8c2f-4c90997e7180" (UID: "af869d64-c165-44c6-8c2f-4c90997e7180"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.013655 4858 scope.go:117] "RemoveContainer" containerID="86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.035565 4858 scope.go:117] "RemoveContainer" containerID="c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.056169 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.056203 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.056214 4858 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/af869d64-c165-44c6-8c2f-4c90997e7180-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.062191 4858 scope.go:117] "RemoveContainer" containerID="dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849" Dec 05 15:00:53 crc kubenswrapper[4858]: E1205 15:00:53.062972 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849\": container with ID starting with dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849 not found: ID does not exist" containerID="dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.063027 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849"} err="failed to get container status \"dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849\": rpc error: code = NotFound desc = could not find container \"dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849\": container with ID starting with dfe8107541e38c188c871ca85b2e39ab49932ba9240293b1e308f255faed3849 not found: ID does not exist" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.063060 4858 scope.go:117] "RemoveContainer" containerID="2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb" Dec 05 15:00:53 crc kubenswrapper[4858]: E1205 15:00:53.063537 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb\": container with ID starting with 2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb not found: ID does not exist" containerID="2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.063562 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb"} err="failed to get container status \"2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb\": rpc error: code = NotFound desc = could not find container 
\"2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb\": container with ID starting with 2b26dd28785e6edcf9b26714cb6dd0c3694b25859e79b524773b3a178520cbfb not found: ID does not exist" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.063575 4858 scope.go:117] "RemoveContainer" containerID="86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0" Dec 05 15:00:53 crc kubenswrapper[4858]: E1205 15:00:53.063766 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0\": container with ID starting with 86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0 not found: ID does not exist" containerID="86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.063796 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0"} err="failed to get container status \"86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0\": rpc error: code = NotFound desc = could not find container \"86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0\": container with ID starting with 86b6b82cd0a087bff109747d40e1cbbd86d61563c2224106bbc4abc7128f3ac0 not found: ID does not exist" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.063810 4858 scope.go:117] "RemoveContainer" containerID="c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9" Dec 05 15:00:53 crc kubenswrapper[4858]: E1205 15:00:53.064064 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9\": container with ID starting with c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9 not found: ID does not exist" containerID="c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.064091 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9"} err="failed to get container status \"c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9\": rpc error: code = NotFound desc = could not find container \"c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9\": container with ID starting with c7b5c9173f0cb48f8c9772061a2ad867c20847d05db428489cada552ad23e3b9 not found: ID does not exist" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.262753 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.288710 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.302493 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:53 crc kubenswrapper[4858]: E1205 15:00:53.302937 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="ceilometer-central-agent" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.302956 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="ceilometer-central-agent" Dec 05 15:00:53 crc 
kubenswrapper[4858]: E1205 15:00:53.302970 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="ceilometer-notification-agent" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.302977 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="ceilometer-notification-agent" Dec 05 15:00:53 crc kubenswrapper[4858]: E1205 15:00:53.302999 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="sg-core" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.303005 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="sg-core" Dec 05 15:00:53 crc kubenswrapper[4858]: E1205 15:00:53.303020 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="proxy-httpd" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.303026 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="proxy-httpd" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.303206 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="sg-core" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.303219 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="ceilometer-central-agent" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.303228 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="proxy-httpd" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.303240 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" containerName="ceilometer-notification-agent" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.306068 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.310655 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.310767 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.310672 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.335660 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.364883 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.364929 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.365000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41ac7a05-cdcc-49c3-b134-8db7753f2757-log-httpd\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.365038 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41ac7a05-cdcc-49c3-b134-8db7753f2757-run-httpd\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.365115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-config-data\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.365130 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.365284 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-scripts\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.365339 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6bl\" (UniqueName: 
\"kubernetes.io/projected/41ac7a05-cdcc-49c3-b134-8db7753f2757-kube-api-access-kr6bl\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.466649 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-config-data\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.466714 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.466782 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-scripts\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.466810 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr6bl\" (UniqueName: \"kubernetes.io/projected/41ac7a05-cdcc-49c3-b134-8db7753f2757-kube-api-access-kr6bl\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.466899 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.466916 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.466954 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41ac7a05-cdcc-49c3-b134-8db7753f2757-log-httpd\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.466988 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41ac7a05-cdcc-49c3-b134-8db7753f2757-run-httpd\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.467631 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41ac7a05-cdcc-49c3-b134-8db7753f2757-run-httpd\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.468008 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/41ac7a05-cdcc-49c3-b134-8db7753f2757-log-httpd\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.471779 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-config-data\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.472192 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.472741 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.473656 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-scripts\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.482568 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/41ac7a05-cdcc-49c3-b134-8db7753f2757-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.486800 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr6bl\" (UniqueName: \"kubernetes.io/projected/41ac7a05-cdcc-49c3-b134-8db7753f2757-kube-api-access-kr6bl\") pod \"ceilometer-0\" (UID: \"41ac7a05-cdcc-49c3-b134-8db7753f2757\") " pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.663006 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 15:00:53 crc kubenswrapper[4858]: I1205 15:00:53.912273 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af869d64-c165-44c6-8c2f-4c90997e7180" path="/var/lib/kubelet/pods/af869d64-c165-44c6-8c2f-4c90997e7180/volumes" Dec 05 15:00:54 crc kubenswrapper[4858]: I1205 15:00:54.165575 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 15:00:54 crc kubenswrapper[4858]: W1205 15:00:54.171445 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41ac7a05_cdcc_49c3_b134_8db7753f2757.slice/crio-1f4e1d83792c7207038334aab1f08a52fce2568de4e8fe4d7cdcfae510d4ac3c WatchSource:0}: Error finding container 1f4e1d83792c7207038334aab1f08a52fce2568de4e8fe4d7cdcfae510d4ac3c: Status 404 returned error can't find the container with id 1f4e1d83792c7207038334aab1f08a52fce2568de4e8fe4d7cdcfae510d4ac3c Dec 05 15:00:54 crc kubenswrapper[4858]: I1205 15:00:54.943332 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41ac7a05-cdcc-49c3-b134-8db7753f2757","Type":"ContainerStarted","Data":"1f4e1d83792c7207038334aab1f08a52fce2568de4e8fe4d7cdcfae510d4ac3c"} Dec 05 15:00:55 crc kubenswrapper[4858]: I1205 15:00:55.158042 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 15:00:55 crc kubenswrapper[4858]: I1205 15:00:55.977309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41ac7a05-cdcc-49c3-b134-8db7753f2757","Type":"ContainerStarted","Data":"18960f48375a2012e69e67c36cf3defdb2fe30d72659a81112ff417711f7ed8a"} Dec 05 15:00:55 crc kubenswrapper[4858]: I1205 15:00:55.977860 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41ac7a05-cdcc-49c3-b134-8db7753f2757","Type":"ContainerStarted","Data":"f4669f05b50c7d50c52dc618f992802638b22cdd52599d81eaaa4c0ebf09a955"} Dec 05 15:00:57 crc kubenswrapper[4858]: I1205 15:00:56.999937 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41ac7a05-cdcc-49c3-b134-8db7753f2757","Type":"ContainerStarted","Data":"cffa6ef925f37155d53e7743707a416b3e5db3e02bd0a61a01d081fcad7933b5"} Dec 05 15:00:58 crc kubenswrapper[4858]: I1205 15:00:58.011814 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41ac7a05-cdcc-49c3-b134-8db7753f2757","Type":"ContainerStarted","Data":"15a714d9d18a6c7f27ce7810773d3b34b52a3a27169bc127ece4a00dc0befdf4"} Dec 05 15:00:58 crc kubenswrapper[4858]: I1205 15:00:58.013162 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 15:00:58 crc kubenswrapper[4858]: I1205 15:00:58.038858 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6171021620000001 podStartE2EDuration="5.038837199s" podCreationTimestamp="2025-12-05 15:00:53 +0000 UTC" firstStartedPulling="2025-12-05 15:00:54.1752443 +0000 UTC m=+3862.722842439" lastFinishedPulling="2025-12-05 15:00:57.596979337 +0000 UTC m=+3866.144577476" observedRunningTime="2025-12-05 15:00:58.030512794 +0000 UTC m=+3866.578110933" watchObservedRunningTime="2025-12-05 15:00:58.038837199 +0000 UTC m=+3866.586435338" Dec 05 15:00:59 crc kubenswrapper[4858]: I1205 15:00:59.900143 4858 scope.go:117] "RemoveContainer" 
containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:00:59 crc kubenswrapper[4858]: E1205 15:00:59.900676 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.201577 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29415781-ndwbd"] Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.203480 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.213090 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29415781-ndwbd"] Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.363625 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcq9\" (UniqueName: \"kubernetes.io/projected/8658ca73-911c-47b5-9606-8c06b4380dd6-kube-api-access-4fcq9\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.363688 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-combined-ca-bundle\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.364040 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-config-data\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.364197 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-fernet-keys\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.466565 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcq9\" (UniqueName: \"kubernetes.io/projected/8658ca73-911c-47b5-9606-8c06b4380dd6-kube-api-access-4fcq9\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.466632 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-combined-ca-bundle\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: 
I1205 15:01:00.466715 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-config-data\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.466756 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-fernet-keys\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.479181 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-config-data\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.479878 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-combined-ca-bundle\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.480570 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-fernet-keys\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.489277 4858 scope.go:117] "RemoveContainer" containerID="b946863cbe80dada0fde3fb478d5d5df9bae80ae7d13100ee9c4fd0913141e58" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.491137 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fcq9\" (UniqueName: \"kubernetes.io/projected/8658ca73-911c-47b5-9606-8c06b4380dd6-kube-api-access-4fcq9\") pod \"keystone-cron-29415781-ndwbd\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:00 crc kubenswrapper[4858]: I1205 15:01:00.534638 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:01 crc kubenswrapper[4858]: I1205 15:01:01.116433 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29415781-ndwbd"] Dec 05 15:01:01 crc kubenswrapper[4858]: W1205 15:01:01.118086 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8658ca73_911c_47b5_9606_8c06b4380dd6.slice/crio-ed444a0265da7dcbcbbf501c1956e2bfb4f7903376f95694c2f983084f040d0e WatchSource:0}: Error finding container ed444a0265da7dcbcbbf501c1956e2bfb4f7903376f95694c2f983084f040d0e: Status 404 returned error can't find the container with id ed444a0265da7dcbcbbf501c1956e2bfb4f7903376f95694c2f983084f040d0e Dec 05 15:01:02 crc kubenswrapper[4858]: I1205 15:01:02.049058 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415781-ndwbd" event={"ID":"8658ca73-911c-47b5-9606-8c06b4380dd6","Type":"ContainerStarted","Data":"7566cb6791fef7703f82ee6aa9db5779cfdf4e9bb13fd5ec663973ce69038686"} Dec 05 15:01:02 crc kubenswrapper[4858]: I1205 15:01:02.049367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415781-ndwbd" event={"ID":"8658ca73-911c-47b5-9606-8c06b4380dd6","Type":"ContainerStarted","Data":"ed444a0265da7dcbcbbf501c1956e2bfb4f7903376f95694c2f983084f040d0e"} Dec 05 15:01:02 crc kubenswrapper[4858]: I1205 15:01:02.069106 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29415781-ndwbd" podStartSLOduration=2.069088915 podStartE2EDuration="2.069088915s" podCreationTimestamp="2025-12-05 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 15:01:02.064260174 +0000 UTC m=+3870.611858313" watchObservedRunningTime="2025-12-05 15:01:02.069088915 +0000 UTC m=+3870.616687044" Dec 05 15:01:06 crc kubenswrapper[4858]: I1205 15:01:06.079759 4858 generic.go:334] "Generic (PLEG): container finished" podID="8658ca73-911c-47b5-9606-8c06b4380dd6" containerID="7566cb6791fef7703f82ee6aa9db5779cfdf4e9bb13fd5ec663973ce69038686" exitCode=0 Dec 05 15:01:06 crc kubenswrapper[4858]: I1205 15:01:06.079970 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415781-ndwbd" event={"ID":"8658ca73-911c-47b5-9606-8c06b4380dd6","Type":"ContainerDied","Data":"7566cb6791fef7703f82ee6aa9db5779cfdf4e9bb13fd5ec663973ce69038686"} Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.587806 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.681258 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-combined-ca-bundle\") pod \"8658ca73-911c-47b5-9606-8c06b4380dd6\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.681322 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-config-data\") pod \"8658ca73-911c-47b5-9606-8c06b4380dd6\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.681368 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-fernet-keys\") pod \"8658ca73-911c-47b5-9606-8c06b4380dd6\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.681444 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fcq9\" (UniqueName: \"kubernetes.io/projected/8658ca73-911c-47b5-9606-8c06b4380dd6-kube-api-access-4fcq9\") pod \"8658ca73-911c-47b5-9606-8c06b4380dd6\" (UID: \"8658ca73-911c-47b5-9606-8c06b4380dd6\") " Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.691593 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8658ca73-911c-47b5-9606-8c06b4380dd6-kube-api-access-4fcq9" (OuterVolumeSpecName: "kube-api-access-4fcq9") pod "8658ca73-911c-47b5-9606-8c06b4380dd6" (UID: "8658ca73-911c-47b5-9606-8c06b4380dd6"). InnerVolumeSpecName "kube-api-access-4fcq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.691660 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8658ca73-911c-47b5-9606-8c06b4380dd6" (UID: "8658ca73-911c-47b5-9606-8c06b4380dd6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.717969 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8658ca73-911c-47b5-9606-8c06b4380dd6" (UID: "8658ca73-911c-47b5-9606-8c06b4380dd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.780984 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-config-data" (OuterVolumeSpecName: "config-data") pod "8658ca73-911c-47b5-9606-8c06b4380dd6" (UID: "8658ca73-911c-47b5-9606-8c06b4380dd6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.784995 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.785086 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.785207 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fcq9\" (UniqueName: \"kubernetes.io/projected/8658ca73-911c-47b5-9606-8c06b4380dd6-kube-api-access-4fcq9\") on node \"crc\" DevicePath \"\"" Dec 05 15:01:07 crc kubenswrapper[4858]: I1205 15:01:07.785277 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8658ca73-911c-47b5-9606-8c06b4380dd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 15:01:08 crc kubenswrapper[4858]: I1205 15:01:08.095544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415781-ndwbd" event={"ID":"8658ca73-911c-47b5-9606-8c06b4380dd6","Type":"ContainerDied","Data":"ed444a0265da7dcbcbbf501c1956e2bfb4f7903376f95694c2f983084f040d0e"} Dec 05 15:01:08 crc kubenswrapper[4858]: I1205 15:01:08.095586 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed444a0265da7dcbcbbf501c1956e2bfb4f7903376f95694c2f983084f040d0e" Dec 05 15:01:08 crc kubenswrapper[4858]: I1205 15:01:08.095785 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415781-ndwbd" Dec 05 15:01:10 crc kubenswrapper[4858]: I1205 15:01:10.899423 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:01:10 crc kubenswrapper[4858]: E1205 15:01:10.900090 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:01:19 crc kubenswrapper[4858]: I1205 15:01:19.205886 4858 generic.go:334] "Generic (PLEG): container finished" podID="521a1948-1758-4148-be85-f3d91f04aac9" containerID="d760f1907015344ed2e0efca3663bcf05625742bc6123f022ebcd1dbf3de9ef2" exitCode=1 Dec 05 15:01:19 crc kubenswrapper[4858]: I1205 15:01:19.205980 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerDied","Data":"d760f1907015344ed2e0efca3663bcf05625742bc6123f022ebcd1dbf3de9ef2"} Dec 05 15:01:19 crc kubenswrapper[4858]: I1205 15:01:19.207647 4858 scope.go:117] "RemoveContainer" containerID="d760f1907015344ed2e0efca3663bcf05625742bc6123f022ebcd1dbf3de9ef2" Dec 05 15:01:20 crc kubenswrapper[4858]: I1205 15:01:20.217956 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" 
event={"ID":"521a1948-1758-4148-be85-f3d91f04aac9","Type":"ContainerStarted","Data":"028d54e7ce26da395589d898d079c3a0dcc04e25d0cbf2886e98cba3619aad7b"} Dec 05 15:01:23 crc kubenswrapper[4858]: I1205 15:01:23.695865 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 15:01:25 crc kubenswrapper[4858]: I1205 15:01:25.899178 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:01:25 crc kubenswrapper[4858]: E1205 15:01:25.900027 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:01:37 crc kubenswrapper[4858]: I1205 15:01:37.899755 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:01:37 crc kubenswrapper[4858]: E1205 15:01:37.900754 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:01:52 crc kubenswrapper[4858]: I1205 15:01:52.899730 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:01:52 crc kubenswrapper[4858]: E1205 15:01:52.900550 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:02:00 crc kubenswrapper[4858]: I1205 15:02:00.922619 4858 scope.go:117] "RemoveContainer" containerID="e38c262b42469b3164cbc0f3b3bf6a47d1a39f624fd084aaa4c09d7146beeed7" Dec 05 15:02:00 crc kubenswrapper[4858]: I1205 15:02:00.967001 4858 scope.go:117] "RemoveContainer" containerID="cfcd51060e3e3de5341228c5bb6ddefb357fa57b99e62adc7a58281784a4e1f1" Dec 05 15:02:01 crc kubenswrapper[4858]: I1205 15:02:01.006307 4858 scope.go:117] "RemoveContainer" containerID="3d0b81552b9c7adb7801248775f0a3fe2215b8ba0138a5015c22bb5e07f41c44" Dec 05 15:02:04 crc kubenswrapper[4858]: I1205 15:02:04.899523 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:02:04 crc kubenswrapper[4858]: E1205 15:02:04.900447 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 
15:02:16 crc kubenswrapper[4858]: I1205 15:02:16.899232 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:02:16 crc kubenswrapper[4858]: E1205 15:02:16.900052 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:02:30 crc kubenswrapper[4858]: I1205 15:02:30.899253 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:02:30 crc kubenswrapper[4858]: E1205 15:02:30.900475 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:02:44 crc kubenswrapper[4858]: I1205 15:02:44.899853 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:02:44 crc kubenswrapper[4858]: E1205 15:02:44.900583 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:02:57 crc kubenswrapper[4858]: I1205 15:02:57.899741 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:02:57 crc kubenswrapper[4858]: E1205 15:02:57.902382 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:03:12 crc kubenswrapper[4858]: I1205 15:03:12.899556 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:03:12 crc kubenswrapper[4858]: E1205 15:03:12.900272 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:03:27 crc kubenswrapper[4858]: I1205 15:03:27.899343 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:03:27 crc 
kubenswrapper[4858]: E1205 15:03:27.900972 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:03:40 crc kubenswrapper[4858]: I1205 15:03:40.899742 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:03:40 crc kubenswrapper[4858]: E1205 15:03:40.901369 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:03:52 crc kubenswrapper[4858]: I1205 15:03:52.901000 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:03:52 crc kubenswrapper[4858]: E1205 15:03:52.901722 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:04:07 crc kubenswrapper[4858]: I1205 15:04:07.904121 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:04:07 crc kubenswrapper[4858]: E1205 15:04:07.905109 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:04:22 crc kubenswrapper[4858]: I1205 15:04:22.899501 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:04:22 crc kubenswrapper[4858]: E1205 15:04:22.900343 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:04:33 crc kubenswrapper[4858]: I1205 15:04:33.899433 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:04:33 crc kubenswrapper[4858]: E1205 15:04:33.900297 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.284381 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ww2mn"] Dec 05 15:04:42 crc kubenswrapper[4858]: E1205 15:04:42.287548 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8658ca73-911c-47b5-9606-8c06b4380dd6" containerName="keystone-cron" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.287580 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="8658ca73-911c-47b5-9606-8c06b4380dd6" containerName="keystone-cron" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.288405 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="8658ca73-911c-47b5-9606-8c06b4380dd6" containerName="keystone-cron" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.292568 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.448253 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-utilities\") pod \"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.449023 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-catalog-content\") pod \"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.449268 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86fcm\" (UniqueName: \"kubernetes.io/projected/ac65ea89-21ce-4519-aee8-9290192446b9-kube-api-access-86fcm\") pod \"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.551083 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86fcm\" (UniqueName: \"kubernetes.io/projected/ac65ea89-21ce-4519-aee8-9290192446b9-kube-api-access-86fcm\") pod \"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.551163 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-utilities\") pod \"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.551223 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-catalog-content\") pod 
\"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.553101 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-utilities\") pod \"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.553281 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-catalog-content\") pod \"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.591362 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86fcm\" (UniqueName: \"kubernetes.io/projected/ac65ea89-21ce-4519-aee8-9290192446b9-kube-api-access-86fcm\") pod \"redhat-marketplace-ww2mn\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.594587 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww2mn"] Dec 05 15:04:42 crc kubenswrapper[4858]: I1205 15:04:42.617494 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:43 crc kubenswrapper[4858]: I1205 15:04:43.394807 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww2mn"] Dec 05 15:04:44 crc kubenswrapper[4858]: I1205 15:04:44.057280 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww2mn" event={"ID":"ac65ea89-21ce-4519-aee8-9290192446b9","Type":"ContainerDied","Data":"d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25"} Dec 05 15:04:44 crc kubenswrapper[4858]: I1205 15:04:44.057201 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac65ea89-21ce-4519-aee8-9290192446b9" containerID="d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25" exitCode=0 Dec 05 15:04:44 crc kubenswrapper[4858]: I1205 15:04:44.057993 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww2mn" event={"ID":"ac65ea89-21ce-4519-aee8-9290192446b9","Type":"ContainerStarted","Data":"6c57036f5e42c5c2eac40f8b2b8cf9b43835bc061baebba41a11162d3a089f83"} Dec 05 15:04:44 crc kubenswrapper[4858]: I1205 15:04:44.899131 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:04:44 crc kubenswrapper[4858]: E1205 15:04:44.900467 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:04:45 crc kubenswrapper[4858]: I1205 15:04:45.068550 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-ww2mn" event={"ID":"ac65ea89-21ce-4519-aee8-9290192446b9","Type":"ContainerStarted","Data":"52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088"} Dec 05 15:04:46 crc kubenswrapper[4858]: I1205 15:04:46.079737 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac65ea89-21ce-4519-aee8-9290192446b9" containerID="52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088" exitCode=0 Dec 05 15:04:46 crc kubenswrapper[4858]: I1205 15:04:46.079786 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww2mn" event={"ID":"ac65ea89-21ce-4519-aee8-9290192446b9","Type":"ContainerDied","Data":"52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088"} Dec 05 15:04:47 crc kubenswrapper[4858]: I1205 15:04:47.090634 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww2mn" event={"ID":"ac65ea89-21ce-4519-aee8-9290192446b9","Type":"ContainerStarted","Data":"467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e"} Dec 05 15:04:47 crc kubenswrapper[4858]: I1205 15:04:47.116792 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ww2mn" podStartSLOduration=2.705527453 podStartE2EDuration="5.116365002s" podCreationTimestamp="2025-12-05 15:04:42 +0000 UTC" firstStartedPulling="2025-12-05 15:04:44.059150042 +0000 UTC m=+4092.606748181" lastFinishedPulling="2025-12-05 15:04:46.469987581 +0000 UTC m=+4095.017585730" observedRunningTime="2025-12-05 15:04:47.110379881 +0000 UTC m=+4095.657978020" watchObservedRunningTime="2025-12-05 15:04:47.116365002 +0000 UTC m=+4095.663963141" Dec 05 15:04:52 crc kubenswrapper[4858]: I1205 15:04:52.618287 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:52 crc kubenswrapper[4858]: I1205 15:04:52.618595 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:52 crc kubenswrapper[4858]: I1205 15:04:52.674504 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:53 crc kubenswrapper[4858]: I1205 15:04:53.186273 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:53 crc kubenswrapper[4858]: I1205 15:04:53.237155 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww2mn"] Dec 05 15:04:55 crc kubenswrapper[4858]: I1205 15:04:55.153510 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ww2mn" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" containerName="registry-server" containerID="cri-o://467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e" gracePeriod=2 Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.027033 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.117274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-catalog-content\") pod \"ac65ea89-21ce-4519-aee8-9290192446b9\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.117352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86fcm\" (UniqueName: \"kubernetes.io/projected/ac65ea89-21ce-4519-aee8-9290192446b9-kube-api-access-86fcm\") pod \"ac65ea89-21ce-4519-aee8-9290192446b9\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.117508 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-utilities\") pod \"ac65ea89-21ce-4519-aee8-9290192446b9\" (UID: \"ac65ea89-21ce-4519-aee8-9290192446b9\") " Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.119619 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-utilities" (OuterVolumeSpecName: "utilities") pod "ac65ea89-21ce-4519-aee8-9290192446b9" (UID: "ac65ea89-21ce-4519-aee8-9290192446b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.131647 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac65ea89-21ce-4519-aee8-9290192446b9-kube-api-access-86fcm" (OuterVolumeSpecName: "kube-api-access-86fcm") pod "ac65ea89-21ce-4519-aee8-9290192446b9" (UID: "ac65ea89-21ce-4519-aee8-9290192446b9"). InnerVolumeSpecName "kube-api-access-86fcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.142975 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac65ea89-21ce-4519-aee8-9290192446b9" (UID: "ac65ea89-21ce-4519-aee8-9290192446b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.163496 4858 generic.go:334] "Generic (PLEG): container finished" podID="ac65ea89-21ce-4519-aee8-9290192446b9" containerID="467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e" exitCode=0 Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.163559 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww2mn" event={"ID":"ac65ea89-21ce-4519-aee8-9290192446b9","Type":"ContainerDied","Data":"467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e"} Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.163628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ww2mn" event={"ID":"ac65ea89-21ce-4519-aee8-9290192446b9","Type":"ContainerDied","Data":"6c57036f5e42c5c2eac40f8b2b8cf9b43835bc061baebba41a11162d3a089f83"} Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.163652 4858 scope.go:117] "RemoveContainer" containerID="467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.163578 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ww2mn" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.209046 4858 scope.go:117] "RemoveContainer" containerID="52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.220550 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.220589 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86fcm\" (UniqueName: \"kubernetes.io/projected/ac65ea89-21ce-4519-aee8-9290192446b9-kube-api-access-86fcm\") on node \"crc\" DevicePath \"\"" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.220602 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac65ea89-21ce-4519-aee8-9290192446b9-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.224648 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww2mn"] Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.234884 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ww2mn"] Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.240601 4858 scope.go:117] "RemoveContainer" containerID="d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.278127 4858 scope.go:117] "RemoveContainer" containerID="467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e" Dec 05 15:04:56 crc kubenswrapper[4858]: E1205 15:04:56.280248 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e\": container with ID starting with 467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e not found: ID does not exist" containerID="467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.280481 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e"} err="failed to get container status \"467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e\": rpc error: code = NotFound desc = could not find container \"467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e\": container with ID starting with 467b1cc8e6d17074ab6c9ddd778cf77b250c928481eef1c0c9e379b0da49f29e not found: ID does not exist" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.280517 4858 scope.go:117] "RemoveContainer" containerID="52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088" Dec 05 15:04:56 crc kubenswrapper[4858]: E1205 15:04:56.281131 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088\": container with ID starting with 52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088 not found: ID does not exist" containerID="52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.281184 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088"} err="failed to get container status \"52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088\": rpc error: code = NotFound desc = could not find container \"52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088\": container with ID starting with 52570e9c5e0a1156988897e90605c0219cb4c41e5d8eb3b23f39f9631b95b088 not found: ID does not exist" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.281216 4858 scope.go:117] "RemoveContainer" containerID="d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25" Dec 05 15:04:56 crc kubenswrapper[4858]: E1205 15:04:56.281476 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25\": container with ID starting with d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25 not found: ID does not exist" containerID="d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25" Dec 05 15:04:56 crc kubenswrapper[4858]: I1205 15:04:56.281498 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25"} err="failed to get container status \"d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25\": rpc error: code = NotFound desc = could not find container \"d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25\": container with ID starting with d8b4ec1c17c9422244f94b9230db3f2e78c35c4a85569f7a3a37492065365c25 not found: ID does not exist" Dec 05 15:04:57 crc kubenswrapper[4858]: I1205 15:04:57.899462 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003" Dec 05 15:04:57 crc kubenswrapper[4858]: E1205 15:04:57.900467 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Dec 05 15:04:57 crc kubenswrapper[4858]: I1205 15:04:57.909244 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" path="/var/lib/kubelet/pods/ac65ea89-21ce-4519-aee8-9290192446b9/volumes"
Dec 05 15:05:10 crc kubenswrapper[4858]: I1205 15:05:10.899477 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003"
Dec 05 15:05:10 crc kubenswrapper[4858]: E1205 15:05:10.900243 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:05:25 crc kubenswrapper[4858]: I1205 15:05:25.900146 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003"
Dec 05 15:05:25 crc kubenswrapper[4858]: E1205 15:05:25.903189 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:05:36 crc kubenswrapper[4858]: I1205 15:05:36.899683 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003"
Dec 05 15:05:36 crc kubenswrapper[4858]: E1205 15:05:36.900365 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:05:47 crc kubenswrapper[4858]: I1205 15:05:47.899054 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003"
Dec 05 15:05:48 crc kubenswrapper[4858]: I1205 15:05:48.631525 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"021c14b25ca6f3d8523eebe7dd3ab092a2189f3fba31c3edcbb7e7e6ad2db62a"}
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.584795 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qjhd4"]
Dec 05 15:06:02 crc kubenswrapper[4858]: E1205 15:06:02.588079 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" containerName="extract-utilities"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.588191 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" containerName="extract-utilities"
Dec 05 15:06:02 crc kubenswrapper[4858]: E1205 15:06:02.588348 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" containerName="extract-content"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.588365 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" containerName="extract-content"
Dec 05 15:06:02 crc kubenswrapper[4858]: E1205 15:06:02.588389 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" containerName="registry-server"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.588411 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" containerName="registry-server"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.589087 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac65ea89-21ce-4519-aee8-9290192446b9" containerName="registry-server"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.592374 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.660714 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjhd4"]
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.700207 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8b9n\" (UniqueName: \"kubernetes.io/projected/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-kube-api-access-c8b9n\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.700318 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-utilities\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.700345 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-catalog-content\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.802175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8b9n\" (UniqueName: \"kubernetes.io/projected/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-kube-api-access-c8b9n\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.802287 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-utilities\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.802311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-catalog-content\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.805178 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-utilities\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.805480 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-catalog-content\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.825952 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8b9n\" (UniqueName: \"kubernetes.io/projected/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-kube-api-access-c8b9n\") pod \"community-operators-qjhd4\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") " pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:02 crc kubenswrapper[4858]: I1205 15:06:02.913197 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:03 crc kubenswrapper[4858]: I1205 15:06:03.687679 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjhd4"]
Dec 05 15:06:03 crc kubenswrapper[4858]: I1205 15:06:03.801382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjhd4" event={"ID":"9c69501a-1f1f-4136-8756-3ae2d8a72f4e","Type":"ContainerStarted","Data":"cc4a84b37942181f63b1632ea9e0d5a4cb4404636ae789ed93c30d0c411f21eb"}
Dec 05 15:06:04 crc kubenswrapper[4858]: I1205 15:06:04.814094 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerID="940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2" exitCode=0
Dec 05 15:06:04 crc kubenswrapper[4858]: I1205 15:06:04.815000 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjhd4" event={"ID":"9c69501a-1f1f-4136-8756-3ae2d8a72f4e","Type":"ContainerDied","Data":"940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2"}
Dec 05 15:06:04 crc kubenswrapper[4858]: I1205 15:06:04.827949 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 15:06:06 crc kubenswrapper[4858]: I1205 15:06:06.834446 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjhd4" event={"ID":"9c69501a-1f1f-4136-8756-3ae2d8a72f4e","Type":"ContainerStarted","Data":"de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0"}
Dec 05 15:06:07 crc kubenswrapper[4858]: I1205 15:06:07.850276 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerID="de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0" exitCode=0
Dec 05 15:06:07 crc kubenswrapper[4858]: I1205 15:06:07.850627 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjhd4" event={"ID":"9c69501a-1f1f-4136-8756-3ae2d8a72f4e","Type":"ContainerDied","Data":"de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0"}
Dec 05 15:06:08 crc kubenswrapper[4858]: I1205 15:06:08.861917 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjhd4" event={"ID":"9c69501a-1f1f-4136-8756-3ae2d8a72f4e","Type":"ContainerStarted","Data":"b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe"}
Dec 05 15:06:08 crc kubenswrapper[4858]: I1205 15:06:08.891561 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qjhd4" podStartSLOduration=3.486176152 podStartE2EDuration="6.884449319s" podCreationTimestamp="2025-12-05 15:06:02 +0000 UTC" firstStartedPulling="2025-12-05 15:06:04.822986303 +0000 UTC m=+4173.370584442" lastFinishedPulling="2025-12-05 15:06:08.22125947 +0000 UTC m=+4176.768857609" observedRunningTime="2025-12-05 15:06:08.882974829 +0000 UTC m=+4177.430572968" watchObservedRunningTime="2025-12-05 15:06:08.884449319 +0000 UTC m=+4177.432047478"
Dec 05 15:06:12 crc kubenswrapper[4858]: I1205 15:06:12.914137 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:12 crc kubenswrapper[4858]: I1205 15:06:12.914688 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:13 crc kubenswrapper[4858]: I1205 15:06:13.968883 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qjhd4" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="registry-server" probeResult="failure" output=<
Dec 05 15:06:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s
Dec 05 15:06:13 crc kubenswrapper[4858]: >
Dec 05 15:06:22 crc kubenswrapper[4858]: I1205 15:06:22.964696 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:23 crc kubenswrapper[4858]: I1205 15:06:23.013139 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:23 crc kubenswrapper[4858]: I1205 15:06:23.203065 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjhd4"]
Dec 05 15:06:23 crc kubenswrapper[4858]: I1205 15:06:23.996135 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qjhd4" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="registry-server" containerID="cri-o://b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe" gracePeriod=2
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.666142 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.811250 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-utilities\") pod \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") "
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.811313 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-catalog-content\") pod \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") "
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.811475 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8b9n\" (UniqueName: \"kubernetes.io/projected/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-kube-api-access-c8b9n\") pod \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\" (UID: \"9c69501a-1f1f-4136-8756-3ae2d8a72f4e\") "
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.812810 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-utilities" (OuterVolumeSpecName: "utilities") pod "9c69501a-1f1f-4136-8756-3ae2d8a72f4e" (UID: "9c69501a-1f1f-4136-8756-3ae2d8a72f4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.820917 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-kube-api-access-c8b9n" (OuterVolumeSpecName: "kube-api-access-c8b9n") pod "9c69501a-1f1f-4136-8756-3ae2d8a72f4e" (UID: "9c69501a-1f1f-4136-8756-3ae2d8a72f4e"). InnerVolumeSpecName "kube-api-access-c8b9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.912654 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c69501a-1f1f-4136-8756-3ae2d8a72f4e" (UID: "9c69501a-1f1f-4136-8756-3ae2d8a72f4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.913914 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8b9n\" (UniqueName: \"kubernetes.io/projected/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-kube-api-access-c8b9n\") on node \"crc\" DevicePath \"\""
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.913955 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 15:06:24 crc kubenswrapper[4858]: I1205 15:06:24.913970 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c69501a-1f1f-4136-8756-3ae2d8a72f4e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.001373 4858 generic.go:334] "Generic (PLEG): container finished" podID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerID="b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe" exitCode=0
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.001427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjhd4" event={"ID":"9c69501a-1f1f-4136-8756-3ae2d8a72f4e","Type":"ContainerDied","Data":"b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe"}
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.001466 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjhd4" event={"ID":"9c69501a-1f1f-4136-8756-3ae2d8a72f4e","Type":"ContainerDied","Data":"cc4a84b37942181f63b1632ea9e0d5a4cb4404636ae789ed93c30d0c411f21eb"}
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.001456 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjhd4"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.001887 4858 scope.go:117] "RemoveContainer" containerID="b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.037602 4858 scope.go:117] "RemoveContainer" containerID="de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.062493 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjhd4"]
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.070892 4858 scope.go:117] "RemoveContainer" containerID="940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.080584 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qjhd4"]
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.154626 4858 scope.go:117] "RemoveContainer" containerID="b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe"
Dec 05 15:06:25 crc kubenswrapper[4858]: E1205 15:06:25.163623 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe\": container with ID starting with b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe not found: ID does not exist" containerID="b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.163851 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe"} err="failed to get container status \"b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe\": rpc error: code = NotFound desc = could not find container \"b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe\": container with ID starting with b7a49daa8fa808b62bfbfcd76329a10feab9b10b5b6df6c1cbbf4164a912f8fe not found: ID does not exist"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.163890 4858 scope.go:117] "RemoveContainer" containerID="de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0"
Dec 05 15:06:25 crc kubenswrapper[4858]: E1205 15:06:25.167283 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0\": container with ID starting with de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0 not found: ID does not exist" containerID="de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.167330 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0"} err="failed to get container status \"de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0\": rpc error: code = NotFound desc = could not find container \"de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0\": container with ID starting with de3823e717a6961ee7bd722426cfd5261be855d00c98df2e0cfb618262405de0 not found: ID does not exist"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.167356 4858 scope.go:117] "RemoveContainer" containerID="940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2"
Dec 05 15:06:25 crc kubenswrapper[4858]: E1205 15:06:25.171170 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2\": container with ID starting with 940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2 not found: ID does not exist" containerID="940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.171210 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2"} err="failed to get container status \"940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2\": rpc error: code = NotFound desc = could not find container \"940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2\": container with ID starting with 940c887706fd7294be00171c36a87d8f2b71e908d925f79ed264fa05b30985e2 not found: ID does not exist"
Dec 05 15:06:25 crc kubenswrapper[4858]: I1205 15:06:25.910534 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" path="/var/lib/kubelet/pods/9c69501a-1f1f-4136-8756-3ae2d8a72f4e/volumes"
Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.067730 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c24f6"]
Dec 05 15:07:37 crc kubenswrapper[4858]: E1205 15:07:37.070051 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="registry-server"
Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.070080 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="registry-server"
Dec 05 15:07:37 crc kubenswrapper[4858]: E1205 15:07:37.070128 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="extract-utilities"
Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.070137 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="extract-utilities"
Dec 05 15:07:37 crc kubenswrapper[4858]: E1205 15:07:37.070166 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="extract-content"
Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.070174 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="extract-content"
Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.070703 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c69501a-1f1f-4136-8756-3ae2d8a72f4e" containerName="registry-server"
Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.073255 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c24f6"
Need to start a new one" pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.164677 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c24f6"] Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.270821 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4857\" (UniqueName: \"kubernetes.io/projected/328dbe04-94df-44fc-85d1-4e226badd68e-kube-api-access-k4857\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.271124 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-catalog-content\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.271413 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-utilities\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.373226 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4857\" (UniqueName: \"kubernetes.io/projected/328dbe04-94df-44fc-85d1-4e226badd68e-kube-api-access-k4857\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.373311 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-catalog-content\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.373389 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-utilities\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.375668 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-catalog-content\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.376228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-utilities\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.498348 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k4857\" (UniqueName: \"kubernetes.io/projected/328dbe04-94df-44fc-85d1-4e226badd68e-kube-api-access-k4857\") pod \"redhat-operators-c24f6\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:37 crc kubenswrapper[4858]: I1205 15:07:37.693572 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:38 crc kubenswrapper[4858]: I1205 15:07:38.355704 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c24f6"] Dec 05 15:07:38 crc kubenswrapper[4858]: I1205 15:07:38.679906 4858 generic.go:334] "Generic (PLEG): container finished" podID="328dbe04-94df-44fc-85d1-4e226badd68e" containerID="476392ec89c0a885a278f539f6bb574a107b6f0043b34ca5f85284140f0f5eca" exitCode=0 Dec 05 15:07:38 crc kubenswrapper[4858]: I1205 15:07:38.680034 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c24f6" event={"ID":"328dbe04-94df-44fc-85d1-4e226badd68e","Type":"ContainerDied","Data":"476392ec89c0a885a278f539f6bb574a107b6f0043b34ca5f85284140f0f5eca"} Dec 05 15:07:38 crc kubenswrapper[4858]: I1205 15:07:38.680312 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c24f6" event={"ID":"328dbe04-94df-44fc-85d1-4e226badd68e","Type":"ContainerStarted","Data":"1f2ab7474316f0e4e7a33543af902d6edc68d8dce8836f0ae3d4ef478975f4dc"} Dec 05 15:07:39 crc kubenswrapper[4858]: I1205 15:07:39.692019 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c24f6" event={"ID":"328dbe04-94df-44fc-85d1-4e226badd68e","Type":"ContainerStarted","Data":"1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109"} Dec 05 15:07:42 crc kubenswrapper[4858]: I1205 15:07:42.727337 4858 generic.go:334] "Generic (PLEG): container finished" podID="328dbe04-94df-44fc-85d1-4e226badd68e" containerID="1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109" exitCode=0 Dec 05 15:07:42 crc kubenswrapper[4858]: I1205 15:07:42.727436 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c24f6" event={"ID":"328dbe04-94df-44fc-85d1-4e226badd68e","Type":"ContainerDied","Data":"1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109"} Dec 05 15:07:43 crc kubenswrapper[4858]: I1205 15:07:43.738367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c24f6" event={"ID":"328dbe04-94df-44fc-85d1-4e226badd68e","Type":"ContainerStarted","Data":"547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597"} Dec 05 15:07:43 crc kubenswrapper[4858]: I1205 15:07:43.784235 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c24f6" podStartSLOduration=2.25370462 podStartE2EDuration="6.779956325s" podCreationTimestamp="2025-12-05 15:07:37 +0000 UTC" firstStartedPulling="2025-12-05 15:07:38.682389444 +0000 UTC m=+4267.229987583" lastFinishedPulling="2025-12-05 15:07:43.208641149 +0000 UTC m=+4271.756239288" observedRunningTime="2025-12-05 15:07:43.776171752 +0000 UTC m=+4272.323769901" watchObservedRunningTime="2025-12-05 15:07:43.779956325 +0000 UTC m=+4272.327554464" Dec 05 15:07:47 crc kubenswrapper[4858]: I1205 15:07:47.694012 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c24f6" Dec 
05 15:07:47 crc kubenswrapper[4858]: I1205 15:07:47.694610 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:48 crc kubenswrapper[4858]: I1205 15:07:48.743969 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c24f6" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="registry-server" probeResult="failure" output=< Dec 05 15:07:48 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:07:48 crc kubenswrapper[4858]: > Dec 05 15:07:57 crc kubenswrapper[4858]: I1205 15:07:57.744171 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:57 crc kubenswrapper[4858]: I1205 15:07:57.796909 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:57 crc kubenswrapper[4858]: I1205 15:07:57.985600 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c24f6"] Dec 05 15:07:58 crc kubenswrapper[4858]: I1205 15:07:58.904135 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c24f6" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="registry-server" containerID="cri-o://547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597" gracePeriod=2 Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.811471 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.913298 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-utilities\") pod \"328dbe04-94df-44fc-85d1-4e226badd68e\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.913430 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4857\" (UniqueName: \"kubernetes.io/projected/328dbe04-94df-44fc-85d1-4e226badd68e-kube-api-access-k4857\") pod \"328dbe04-94df-44fc-85d1-4e226badd68e\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.913510 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-catalog-content\") pod \"328dbe04-94df-44fc-85d1-4e226badd68e\" (UID: \"328dbe04-94df-44fc-85d1-4e226badd68e\") " Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.915048 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-utilities" (OuterVolumeSpecName: "utilities") pod "328dbe04-94df-44fc-85d1-4e226badd68e" (UID: "328dbe04-94df-44fc-85d1-4e226badd68e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.918674 4858 generic.go:334] "Generic (PLEG): container finished" podID="328dbe04-94df-44fc-85d1-4e226badd68e" containerID="547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597" exitCode=0 Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.918711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c24f6" event={"ID":"328dbe04-94df-44fc-85d1-4e226badd68e","Type":"ContainerDied","Data":"547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597"} Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.918739 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c24f6" event={"ID":"328dbe04-94df-44fc-85d1-4e226badd68e","Type":"ContainerDied","Data":"1f2ab7474316f0e4e7a33543af902d6edc68d8dce8836f0ae3d4ef478975f4dc"} Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.918782 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c24f6" Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.919051 4858 scope.go:117] "RemoveContainer" containerID="547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597" Dec 05 15:07:59 crc kubenswrapper[4858]: I1205 15:07:59.928399 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328dbe04-94df-44fc-85d1-4e226badd68e-kube-api-access-k4857" (OuterVolumeSpecName: "kube-api-access-k4857") pod "328dbe04-94df-44fc-85d1-4e226badd68e" (UID: "328dbe04-94df-44fc-85d1-4e226badd68e"). InnerVolumeSpecName "kube-api-access-k4857". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.001535 4858 scope.go:117] "RemoveContainer" containerID="1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.015943 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.016276 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4857\" (UniqueName: \"kubernetes.io/projected/328dbe04-94df-44fc-85d1-4e226badd68e-kube-api-access-k4857\") on node \"crc\" DevicePath \"\"" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.020059 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "328dbe04-94df-44fc-85d1-4e226badd68e" (UID: "328dbe04-94df-44fc-85d1-4e226badd68e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.119430 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328dbe04-94df-44fc-85d1-4e226badd68e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.267075 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c24f6"] Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.275738 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c24f6"] Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.424868 4858 scope.go:117] "RemoveContainer" containerID="476392ec89c0a885a278f539f6bb574a107b6f0043b34ca5f85284140f0f5eca" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.488387 4858 scope.go:117] "RemoveContainer" containerID="547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597" Dec 05 15:08:00 crc kubenswrapper[4858]: E1205 15:08:00.489739 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597\": container with ID starting with 547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597 not found: ID does not exist" containerID="547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.490015 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597"} err="failed to get container status \"547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597\": rpc error: code = NotFound desc = could not find container \"547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597\": container with ID starting with 547f071d38868370177e21545319a6c988244b53e20e14a2a2c445f15c083597 not found: ID does not exist" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.490063 4858 scope.go:117] "RemoveContainer" containerID="1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109" Dec 05 15:08:00 crc kubenswrapper[4858]: E1205 15:08:00.490536 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109\": container with ID starting with 1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109 not found: ID does not exist" containerID="1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.490571 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109"} err="failed to get container status \"1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109\": rpc error: code = NotFound desc = could not find container \"1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109\": container with ID starting with 1482001efa72890b02c2587d8af340c26846b5c385ac994522ce04b2a17c7109 not found: ID does not exist" Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.490602 4858 scope.go:117] "RemoveContainer" containerID="476392ec89c0a885a278f539f6bb574a107b6f0043b34ca5f85284140f0f5eca" Dec 05 15:08:00 crc kubenswrapper[4858]: E1205 
Dec 05 15:08:00 crc kubenswrapper[4858]: I1205 15:08:00.490986 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"476392ec89c0a885a278f539f6bb574a107b6f0043b34ca5f85284140f0f5eca"} err="failed to get container status \"476392ec89c0a885a278f539f6bb574a107b6f0043b34ca5f85284140f0f5eca\": rpc error: code = NotFound desc = could not find container \"476392ec89c0a885a278f539f6bb574a107b6f0043b34ca5f85284140f0f5eca\": container with ID starting with 476392ec89c0a885a278f539f6bb574a107b6f0043b34ca5f85284140f0f5eca not found: ID does not exist"
Dec 05 15:08:01 crc kubenswrapper[4858]: I1205 15:08:01.923738 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" path="/var/lib/kubelet/pods/328dbe04-94df-44fc-85d1-4e226badd68e/volumes"
Dec 05 15:08:14 crc kubenswrapper[4858]: I1205 15:08:14.760275 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:08:14 crc kubenswrapper[4858]: I1205 15:08:14.761053 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:08:44 crc kubenswrapper[4858]: I1205 15:08:44.760394 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:08:44 crc kubenswrapper[4858]: I1205 15:08:44.760968 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:09:14 crc kubenswrapper[4858]: I1205 15:09:14.760542 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:09:14 crc kubenswrapper[4858]: I1205 15:09:14.761293 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:09:14 crc kubenswrapper[4858]: I1205 15:09:14.761342 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn"
Dec 05 15:09:14 crc kubenswrapper[4858]: I1205 15:09:14.762237 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"021c14b25ca6f3d8523eebe7dd3ab092a2189f3fba31c3edcbb7e7e6ad2db62a"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 15:09:14 crc kubenswrapper[4858]: I1205 15:09:14.762298 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://021c14b25ca6f3d8523eebe7dd3ab092a2189f3fba31c3edcbb7e7e6ad2db62a" gracePeriod=600
Dec 05 15:09:15 crc kubenswrapper[4858]: I1205 15:09:15.572320 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="021c14b25ca6f3d8523eebe7dd3ab092a2189f3fba31c3edcbb7e7e6ad2db62a" exitCode=0
Dec 05 15:09:15 crc kubenswrapper[4858]: I1205 15:09:15.573027 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"021c14b25ca6f3d8523eebe7dd3ab092a2189f3fba31c3edcbb7e7e6ad2db62a"}
Dec 05 15:09:15 crc kubenswrapper[4858]: I1205 15:09:15.573059 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"}
Dec 05 15:09:15 crc kubenswrapper[4858]: I1205 15:09:15.573077 4858 scope.go:117] "RemoveContainer" containerID="e7278b9b1b23e13f6ff93a0a5d5dcc06fde64a4ad88c4933984de60ace978003"
Dec 05 15:11:04 crc kubenswrapper[4858]: I1205 15:11:04.890712 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:11:04 crc kubenswrapper[4858]: I1205 15:11:04.891679 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:11:04 crc kubenswrapper[4858]: I1205 15:11:04.893346 4858 patch_prober.go:28] interesting pod/router-default-5444994796-kmzj6 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 05 15:11:04 crc kubenswrapper[4858]: I1205 15:11:04.893385 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-kmzj6" podUID="43c50736-3414-483f-8104-cefb05d4552c" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
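Both failure bursts above are plain HTTP probes: machine-config-daemon's liveness endpoint on 127.0.0.1:8798/health refusing connections, and the router's /healthz endpoints timing out. A sketch of the equivalent probe declaration for the former; host, path, and port come from the log, while period and threshold are assumptions (the failures arrive roughly 30s apart):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Liveness probe equivalent to the one failing above.
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1", // probed address from the log
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30, // consistent with the ~30s spacing above
		FailureThreshold: 3,  // assumed
	}
	fmt.Printf("%+v\n", liveness)
}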
exceeded while awaiting headers)" Dec 05 15:11:25 crc kubenswrapper[4858]: I1205 15:11:25.082536 4858 generic.go:334] "Generic (PLEG): container finished" podID="2e4134d1-108e-42bc-81a5-7704e6dff1d2" containerID="b8fd651619c60c9da949e803155a4eea9a0af4412035cf97531d46cb34f28bb9" exitCode=1 Dec 05 15:11:25 crc kubenswrapper[4858]: I1205 15:11:25.082593 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"2e4134d1-108e-42bc-81a5-7704e6dff1d2","Type":"ContainerDied","Data":"b8fd651619c60c9da949e803155a4eea9a0af4412035cf97531d46cb34f28bb9"} Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.632459 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.783311 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.783540 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-temporary\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.783610 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-config-data\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.783721 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-workdir\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.783890 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ca-certs\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.783986 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.784070 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj622\" (UniqueName: \"kubernetes.io/projected/2e4134d1-108e-42bc-81a5-7704e6dff1d2-kube-api-access-wj622\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.784189 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ssh-key\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.785739 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config-secret\") pod \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\" (UID: \"2e4134d1-108e-42bc-81a5-7704e6dff1d2\") " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.786259 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.787502 4858 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.792116 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.792585 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-config-data" (OuterVolumeSpecName: "config-data") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: E1205 15:11:26.795745 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4134d1-108e-42bc-81a5-7704e6dff1d2" containerName="tempest-tests-tempest-tests-runner" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.795795 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4134d1-108e-42bc-81a5-7704e6dff1d2" containerName="tempest-tests-tempest-tests-runner" Dec 05 15:11:26 crc kubenswrapper[4858]: E1205 15:11:26.795857 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="registry-server" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.795869 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="registry-server" Dec 05 15:11:26 crc kubenswrapper[4858]: E1205 15:11:26.795887 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="extract-content" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.795896 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="extract-content" Dec 05 15:11:26 crc kubenswrapper[4858]: E1205 15:11:26.795912 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="extract-utilities" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.795922 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="extract-utilities" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.796430 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4134d1-108e-42bc-81a5-7704e6dff1d2" containerName="tempest-tests-tempest-tests-runner" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.796455 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="328dbe04-94df-44fc-85d1-4e226badd68e" containerName="registry-server" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.813126 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.817594 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.821791 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.828426 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.836403 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4134d1-108e-42bc-81a5-7704e6dff1d2-kube-api-access-wj622" (OuterVolumeSpecName: "kube-api-access-wj622") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "kube-api-access-wj622". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.851692 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.853994 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.860195 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.862427 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.880303 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.888987 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.889018 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4134d1-108e-42bc-81a5-7704e6dff1d2-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.889026 4858 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.889062 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.889073 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj622\" (UniqueName: \"kubernetes.io/projected/2e4134d1-108e-42bc-81a5-7704e6dff1d2-kube-api-access-wj622\") on node \"crc\" DevicePath \"\"" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.889082 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.889093 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e4134d1-108e-42bc-81a5-7704e6dff1d2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.909969 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.916243 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2e4134d1-108e-42bc-81a5-7704e6dff1d2" (UID: "2e4134d1-108e-42bc-81a5-7704e6dff1d2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.991465 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.991733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.991889 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.991978 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.992099 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.992190 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.992275 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5qlg\" (UniqueName: \"kubernetes.io/projected/2dc2f8c9-4ac5-4830-bf63-168798f46840-kube-api-access-m5qlg\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.992388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " 
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.992567 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.992686 4858 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2e4134d1-108e-42bc-81a5-7704e6dff1d2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 05 15:11:26 crc kubenswrapper[4858]: I1205 15:11:26.993494 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.022040 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.094488 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.094525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5qlg\" (UniqueName: \"kubernetes.io/projected/2dc2f8c9-4ac5-4830-bf63-168798f46840-kube-api-access-m5qlg\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.094567 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.094633 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.094672 4858 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.094697 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.094732 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.094755 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.096299 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.096683 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.097418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.098367 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.100574 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.102015 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.102896 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"2e4134d1-108e-42bc-81a5-7704e6dff1d2","Type":"ContainerDied","Data":"9b8d93875e94f8c82d6ea5e6ae892756808364ff368e1764496a35f2dbc56036"} Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.102929 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8d93875e94f8c82d6ea5e6ae892756808364ff368e1764496a35f2dbc56036" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.102990 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.109135 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.112933 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5qlg\" (UniqueName: \"kubernetes.io/projected/2dc2f8c9-4ac5-4830-bf63-168798f46840-kube-api-access-m5qlg\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:27 crc kubenswrapper[4858]: I1205 15:11:27.280405 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 15:11:28 crc kubenswrapper[4858]: I1205 15:11:28.136359 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Dec 05 15:11:29 crc kubenswrapper[4858]: I1205 15:11:29.148771 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"2dc2f8c9-4ac5-4830-bf63-168798f46840","Type":"ContainerStarted","Data":"38cd4dc57a7aa42445690485da9b774c015810598856edb800825ab0872dee87"} Dec 05 15:11:30 crc kubenswrapper[4858]: I1205 15:11:30.158486 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"2dc2f8c9-4ac5-4830-bf63-168798f46840","Type":"ContainerStarted","Data":"9433a658d406b33b0a6180ff141b6227da9fe9cb941dc525a912a73b30acdf8e"} Dec 05 15:11:30 crc kubenswrapper[4858]: I1205 15:11:30.183684 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=4.183429529 podStartE2EDuration="4.183429529s" podCreationTimestamp="2025-12-05 15:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 15:11:30.172291068 +0000 UTC m=+4498.719889207" watchObservedRunningTime="2025-12-05 15:11:30.183429529 +0000 UTC m=+4498.731027668" Dec 05 15:11:44 crc kubenswrapper[4858]: I1205 15:11:44.760033 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:11:44 crc kubenswrapper[4858]: I1205 15:11:44.760487 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:12:14 crc kubenswrapper[4858]: I1205 15:12:14.760504 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:12:14 crc kubenswrapper[4858]: I1205 15:12:14.761203 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.108988 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-766f4465bf-nsk26"] Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.111496 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.129392 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-766f4465bf-nsk26"] Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.143353 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-internal-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.143402 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-combined-ca-bundle\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.143449 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-httpd-config\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.143468 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9fj\" (UniqueName: \"kubernetes.io/projected/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-kube-api-access-6c9fj\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.143569 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-config\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.143592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-ovndb-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.143621 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-public-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.244292 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-httpd-config\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.244341 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6c9fj\" (UniqueName: \"kubernetes.io/projected/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-kube-api-access-6c9fj\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.244426 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-config\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.244446 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-ovndb-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.244470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-public-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.244523 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-internal-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.244540 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-combined-ca-bundle\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.260587 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-internal-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.260762 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-ovndb-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.261120 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-public-tls-certs\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.262152 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-httpd-config\") pod \"neutron-766f4465bf-nsk26\" (UID: 
\"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.263838 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-combined-ca-bundle\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.263948 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-config\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.264508 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c9fj\" (UniqueName: \"kubernetes.io/projected/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-kube-api-access-6c9fj\") pod \"neutron-766f4465bf-nsk26\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:20 crc kubenswrapper[4858]: I1205 15:12:20.443343 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:21 crc kubenswrapper[4858]: I1205 15:12:21.035627 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-766f4465bf-nsk26"] Dec 05 15:12:21 crc kubenswrapper[4858]: I1205 15:12:21.629976 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f4465bf-nsk26" event={"ID":"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b","Type":"ContainerStarted","Data":"6abb36cecb5ed2e30e90590c773e29c3064b3f212d88b8aa9308162d625a0c26"} Dec 05 15:12:21 crc kubenswrapper[4858]: I1205 15:12:21.630211 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f4465bf-nsk26" event={"ID":"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b","Type":"ContainerStarted","Data":"c36c3837d0105467715ced1dd7c74240da14da3530f6090d32afbc0607ecee27"} Dec 05 15:12:21 crc kubenswrapper[4858]: I1205 15:12:21.630246 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f4465bf-nsk26" event={"ID":"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b","Type":"ContainerStarted","Data":"50e7fe74aa3fcba3f6272dfe1f043a8ce3c3132be166e9d07b51a888e229ea2d"} Dec 05 15:12:21 crc kubenswrapper[4858]: I1205 15:12:21.630276 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-766f4465bf-nsk26" Dec 05 15:12:21 crc kubenswrapper[4858]: I1205 15:12:21.654470 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-766f4465bf-nsk26" podStartSLOduration=1.654449181 podStartE2EDuration="1.654449181s" podCreationTimestamp="2025-12-05 15:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 15:12:21.645989632 +0000 UTC m=+4550.193587791" watchObservedRunningTime="2025-12-05 15:12:21.654449181 +0000 UTC m=+4550.202047320" Dec 05 15:12:44 crc kubenswrapper[4858]: I1205 15:12:44.760347 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Dec 05 15:12:44 crc kubenswrapper[4858]: I1205 15:12:44.760875 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:12:44 crc kubenswrapper[4858]: I1205 15:12:44.760921 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn"
Dec 05 15:12:44 crc kubenswrapper[4858]: I1205 15:12:44.761681 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 15:12:44 crc kubenswrapper[4858]: I1205 15:12:44.761732 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" gracePeriod=600
Dec 05 15:12:44 crc kubenswrapper[4858]: E1205 15:12:44.893269 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:12:45 crc kubenswrapper[4858]: I1205 15:12:45.856300 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" exitCode=0
Dec 05 15:12:45 crc kubenswrapper[4858]: I1205 15:12:45.856999 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"}
Dec 05 15:12:45 crc kubenswrapper[4858]: I1205 15:12:45.857097 4858 scope.go:117] "RemoveContainer" containerID="021c14b25ca6f3d8523eebe7dd3ab092a2189f3fba31c3edcbb7e7e6ad2db62a"
Dec 05 15:12:45 crc kubenswrapper[4858]: I1205 15:12:45.861081 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:12:45 crc kubenswrapper[4858]: E1205 15:12:45.862067 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:12:50 crc kubenswrapper[4858]: I1205 15:12:50.459679 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-766f4465bf-nsk26"
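"back-off 5m0s restarting failed container" above is the kubelet's restart backoff at its cap: the wait doubles on every failed restart until it reaches a maximum, then stays there until the container runs cleanly long enough to reset it. A sketch of that schedule, assuming the upstream kubelet defaults of a 10s initial delay and a 5m cap:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling restart backoff capped at 5m; initial delay and cap are
	// upstream defaults, assumed for this build.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // schedule ends at 5m0s, as in the log message
		}
	}
}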
Dec 05 15:12:50 crc kubenswrapper[4858]: I1205 15:12:50.550696 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-794c5555d9-m4bnj"]
Dec 05 15:12:50 crc kubenswrapper[4858]: I1205 15:12:50.551354 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-794c5555d9-m4bnj" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-api" containerID="cri-o://e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f" gracePeriod=30
Dec 05 15:12:50 crc kubenswrapper[4858]: I1205 15:12:50.551659 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-794c5555d9-m4bnj" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-httpd" containerID="cri-o://1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351" gracePeriod=30
Dec 05 15:12:50 crc kubenswrapper[4858]: I1205 15:12:50.901578 4858 generic.go:334] "Generic (PLEG): container finished" podID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerID="1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351" exitCode=0
Dec 05 15:12:50 crc kubenswrapper[4858]: I1205 15:12:50.901622 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794c5555d9-m4bnj" event={"ID":"3b098e12-08af-4c9f-8c3c-851b91c2e8a6","Type":"ContainerDied","Data":"1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351"}
Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.715323 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-794c5555d9-m4bnj"
Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.881567 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-internal-tls-certs\") pod \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") "
Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.881750 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-ovndb-tls-certs\") pod \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") "
Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.881825 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-combined-ca-bundle\") pod \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") "
Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.881861 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-httpd-config\") pod \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") "
Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.881895 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-config\") pod \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") "
Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.881931 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnh42\" (UniqueName: \"kubernetes.io/projected/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-kube-api-access-dnh42\") pod \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") "
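Both containers of the superseded neutron-794c5555d9-m4bnj pod are killed with gracePeriod=30, the pod's effective terminationGracePeriodSeconds (30s is also the API default); the API delete that triggered this can override that value per request. A client-go sketch of such a delete, reusing names from the log, with kubeconfig loaded from the standard recommended home-file path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 30s matches the gracePeriod=30 in the kill entries above; a delete
	// request may lower or raise the pod's own grace period.
	grace := int64(30)
	err = cs.CoreV1().Pods("openstack").Delete(context.TODO(),
		"neutron-794c5555d9-m4bnj", metav1.DeleteOptions{GracePeriodSeconds: &grace})
	fmt.Println("delete:", err)
}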
(UniqueName: \"kubernetes.io/projected/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-kube-api-access-dnh42\") pod \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.881963 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-public-tls-certs\") pod \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\" (UID: \"3b098e12-08af-4c9f-8c3c-851b91c2e8a6\") " Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.887905 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3b098e12-08af-4c9f-8c3c-851b91c2e8a6" (UID: "3b098e12-08af-4c9f-8c3c-851b91c2e8a6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.888610 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-kube-api-access-dnh42" (OuterVolumeSpecName: "kube-api-access-dnh42") pod "3b098e12-08af-4c9f-8c3c-851b91c2e8a6" (UID: "3b098e12-08af-4c9f-8c3c-851b91c2e8a6"). InnerVolumeSpecName "kube-api-access-dnh42". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.934498 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-config" (OuterVolumeSpecName: "config") pod "3b098e12-08af-4c9f-8c3c-851b91c2e8a6" (UID: "3b098e12-08af-4c9f-8c3c-851b91c2e8a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.938953 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3b098e12-08af-4c9f-8c3c-851b91c2e8a6" (UID: "3b098e12-08af-4c9f-8c3c-851b91c2e8a6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.939703 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3b098e12-08af-4c9f-8c3c-851b91c2e8a6" (UID: "3b098e12-08af-4c9f-8c3c-851b91c2e8a6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.948764 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b098e12-08af-4c9f-8c3c-851b91c2e8a6" (UID: "3b098e12-08af-4c9f-8c3c-851b91c2e8a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.960754 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3b098e12-08af-4c9f-8c3c-851b91c2e8a6" (UID: "3b098e12-08af-4c9f-8c3c-851b91c2e8a6"). 
InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.973453 4858 generic.go:334] "Generic (PLEG): container finished" podID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerID="e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f" exitCode=0 Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.973495 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794c5555d9-m4bnj" event={"ID":"3b098e12-08af-4c9f-8c3c-851b91c2e8a6","Type":"ContainerDied","Data":"e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f"} Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.973524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-794c5555d9-m4bnj" event={"ID":"3b098e12-08af-4c9f-8c3c-851b91c2e8a6","Type":"ContainerDied","Data":"7f74b9034cc927e1c59b431f4bd707d0c3b9008c6aa46a94482eb3456f19048f"} Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.973540 4858 scope.go:117] "RemoveContainer" containerID="1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.973608 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-794c5555d9-m4bnj" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.984909 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.984959 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.984977 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.984994 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-config\") on node \"crc\" DevicePath \"\"" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.985011 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnh42\" (UniqueName: \"kubernetes.io/projected/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-kube-api-access-dnh42\") on node \"crc\" DevicePath \"\"" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.985028 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 15:12:56 crc kubenswrapper[4858]: I1205 15:12:56.985043 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b098e12-08af-4c9f-8c3c-851b91c2e8a6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.019755 4858 scope.go:117] "RemoveContainer" containerID="e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f" Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.043020 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-794c5555d9-m4bnj"] Dec 05 
Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.050385 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-794c5555d9-m4bnj"]
Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.061072 4858 scope.go:117] "RemoveContainer" containerID="1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351"
Dec 05 15:12:57 crc kubenswrapper[4858]: E1205 15:12:57.062967 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351\": container with ID starting with 1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351 not found: ID does not exist" containerID="1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351"
Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.063015 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351"} err="failed to get container status \"1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351\": rpc error: code = NotFound desc = could not find container \"1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351\": container with ID starting with 1621905962a856be46038b7775775af0f7572538178d6cc9719111c527060351 not found: ID does not exist"
Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.063044 4858 scope.go:117] "RemoveContainer" containerID="e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f"
Dec 05 15:12:57 crc kubenswrapper[4858]: E1205 15:12:57.065045 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f\": container with ID starting with e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f not found: ID does not exist" containerID="e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f"
Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.065090 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f"} err="failed to get container status \"e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f\": rpc error: code = NotFound desc = could not find container \"e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f\": container with ID starting with e0d7b60d757addc64d88df9c02dc700465e845484f92f823eb75c25c4732294f not found: ID does not exist"
Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.899739 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:12:57 crc kubenswrapper[4858]: E1205 15:12:57.900345 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:12:57 crc kubenswrapper[4858]: I1205 15:12:57.910333 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" path="/var/lib/kubelet/pods/3b098e12-08af-4c9f-8c3c-851b91c2e8a6/volumes"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.795971 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lrz92"]
Dec 05 15:13:00 crc kubenswrapper[4858]: E1205 15:13:00.796631 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-api"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.796644 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-api"
Dec 05 15:13:00 crc kubenswrapper[4858]: E1205 15:13:00.796672 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-httpd"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.796678 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-httpd"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.796897 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-httpd"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.796913 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-api"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.798277 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.821952 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrz92"]
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.950805 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-catalog-content\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.951174 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcnmp\" (UniqueName: \"kubernetes.io/projected/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-kube-api-access-lcnmp\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:00 crc kubenswrapper[4858]: I1205 15:13:00.951200 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-utilities\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:01 crc kubenswrapper[4858]: I1205 15:13:01.052982 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcnmp\" (UniqueName: \"kubernetes.io/projected/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-kube-api-access-lcnmp\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:01 crc kubenswrapper[4858]: I1205 15:13:01.053043 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-utilities\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:01 crc kubenswrapper[4858]: I1205 15:13:01.053567 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-utilities\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:01 crc kubenswrapper[4858]: I1205 15:13:01.054474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-catalog-content\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:01 crc kubenswrapper[4858]: I1205 15:13:01.054912 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-catalog-content\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:01 crc kubenswrapper[4858]: I1205 15:13:01.072326 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcnmp\" (UniqueName: \"kubernetes.io/projected/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-kube-api-access-lcnmp\") pod \"certified-operators-lrz92\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") " pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:01 crc kubenswrapper[4858]: I1205 15:13:01.132152 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:01 crc kubenswrapper[4858]: I1205 15:13:01.572443 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrz92"]
Dec 05 15:13:02 crc kubenswrapper[4858]: I1205 15:13:02.018923 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz92" event={"ID":"2ca1763c-6140-42f0-bf1f-0e52db14fbe1","Type":"ContainerStarted","Data":"61d90470fe5dcec8b073f7a2a6357ef9468a4f5c998482b3585970a9337143d6"}
Dec 05 15:13:03 crc kubenswrapper[4858]: I1205 15:13:03.029490 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerID="e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898" exitCode=0
Dec 05 15:13:03 crc kubenswrapper[4858]: I1205 15:13:03.029545 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz92" event={"ID":"2ca1763c-6140-42f0-bf1f-0e52db14fbe1","Type":"ContainerDied","Data":"e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898"}
Dec 05 15:13:03 crc kubenswrapper[4858]: I1205 15:13:03.032551 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 15:13:05 crc kubenswrapper[4858]: I1205 15:13:05.048752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz92" event={"ID":"2ca1763c-6140-42f0-bf1f-0e52db14fbe1","Type":"ContainerStarted","Data":"31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b"}
Dec 05 15:13:07 crc kubenswrapper[4858]: I1205 15:13:07.066984 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerID="31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b" exitCode=0
Dec 05 15:13:07 crc kubenswrapper[4858]: I1205 15:13:07.067172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz92" event={"ID":"2ca1763c-6140-42f0-bf1f-0e52db14fbe1","Type":"ContainerDied","Data":"31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b"}
Dec 05 15:13:08 crc kubenswrapper[4858]: I1205 15:13:08.083236 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz92" event={"ID":"2ca1763c-6140-42f0-bf1f-0e52db14fbe1","Type":"ContainerStarted","Data":"f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00"}
Dec 05 15:13:08 crc kubenswrapper[4858]: I1205 15:13:08.108558 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lrz92" podStartSLOduration=3.696576216 podStartE2EDuration="8.108533064s" podCreationTimestamp="2025-12-05 15:13:00 +0000 UTC" firstStartedPulling="2025-12-05 15:13:03.031446431 +0000 UTC m=+4591.579044570" lastFinishedPulling="2025-12-05 15:13:07.443403279 +0000 UTC m=+4595.991001418" observedRunningTime="2025-12-05 15:13:08.10392926 +0000 UTC m=+4596.651527409" watchObservedRunningTime="2025-12-05 15:13:08.108533064 +0000 UTC m=+4596.656131203"
Dec 05 15:13:09 crc kubenswrapper[4858]: I1205 15:13:09.899908 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:13:09 crc kubenswrapper[4858]: E1205 15:13:09.900346 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:13:11 crc kubenswrapper[4858]: I1205 15:13:11.132606 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:11 crc kubenswrapper[4858]: I1205 15:13:11.132913 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:11 crc kubenswrapper[4858]: I1205 15:13:11.184203 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:12 crc kubenswrapper[4858]: I1205 15:13:12.158562 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:12 crc kubenswrapper[4858]: I1205 15:13:12.206776 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrz92"]
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.131469 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lrz92" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerName="registry-server" containerID="cri-o://f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00" gracePeriod=2
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.674972 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.853246 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-utilities\") pod \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") "
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.853602 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-catalog-content\") pod \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") "
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.853690 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcnmp\" (UniqueName: \"kubernetes.io/projected/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-kube-api-access-lcnmp\") pod \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\" (UID: \"2ca1763c-6140-42f0-bf1f-0e52db14fbe1\") "
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.854449 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-utilities" (OuterVolumeSpecName: "utilities") pod "2ca1763c-6140-42f0-bf1f-0e52db14fbe1" (UID: "2ca1763c-6140-42f0-bf1f-0e52db14fbe1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.859273 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-kube-api-access-lcnmp" (OuterVolumeSpecName: "kube-api-access-lcnmp") pod "2ca1763c-6140-42f0-bf1f-0e52db14fbe1" (UID: "2ca1763c-6140-42f0-bf1f-0e52db14fbe1"). InnerVolumeSpecName "kube-api-access-lcnmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.904154 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ca1763c-6140-42f0-bf1f-0e52db14fbe1" (UID: "2ca1763c-6140-42f0-bf1f-0e52db14fbe1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.955443 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.955480 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 15:13:14 crc kubenswrapper[4858]: I1205 15:13:14.955491 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcnmp\" (UniqueName: \"kubernetes.io/projected/2ca1763c-6140-42f0-bf1f-0e52db14fbe1-kube-api-access-lcnmp\") on node \"crc\" DevicePath \"\""
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.142424 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerID="f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00" exitCode=0
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.142469 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz92" event={"ID":"2ca1763c-6140-42f0-bf1f-0e52db14fbe1","Type":"ContainerDied","Data":"f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00"}
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.142527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz92" event={"ID":"2ca1763c-6140-42f0-bf1f-0e52db14fbe1","Type":"ContainerDied","Data":"61d90470fe5dcec8b073f7a2a6357ef9468a4f5c998482b3585970a9337143d6"}
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.142524 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrz92"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.142544 4858 scope.go:117] "RemoveContainer" containerID="f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.180528 4858 scope.go:117] "RemoveContainer" containerID="31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.189811 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrz92"]
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.198891 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lrz92"]
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.203549 4858 scope.go:117] "RemoveContainer" containerID="e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.260056 4858 scope.go:117] "RemoveContainer" containerID="f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00"
Dec 05 15:13:15 crc kubenswrapper[4858]: E1205 15:13:15.260579 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00\": container with ID starting with f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00 not found: ID does not exist" containerID="f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.260719 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00"} err="failed to get container status \"f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00\": rpc error: code = NotFound desc = could not find container \"f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00\": container with ID starting with f337153a0e320cfb5624d159d2bf5319090b43e3c6cd3b671d6c8e08c193dd00 not found: ID does not exist"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.260742 4858 scope.go:117] "RemoveContainer" containerID="31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b"
Dec 05 15:13:15 crc kubenswrapper[4858]: E1205 15:13:15.261169 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b\": container with ID starting with 31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b not found: ID does not exist" containerID="31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.261210 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b"} err="failed to get container status \"31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b\": rpc error: code = NotFound desc = could not find container \"31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b\": container with ID starting with 31b23024a4e92e5f2bec8d890cb2f0d35bbaf5dc4982052d44735c16fdd7762b not found: ID does not exist"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.261236 4858 scope.go:117] "RemoveContainer" containerID="e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898"
Dec 05 15:13:15 crc kubenswrapper[4858]: E1205 15:13:15.261560 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898\": container with ID starting with e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898 not found: ID does not exist" containerID="e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.261587 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898"} err="failed to get container status \"e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898\": rpc error: code = NotFound desc = could not find container \"e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898\": container with ID starting with e3f436415907e54381d9297b347454a89ad2e665433c282e3069b7c053e8b898 not found: ID does not exist"
Dec 05 15:13:15 crc kubenswrapper[4858]: I1205 15:13:15.920320 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" path="/var/lib/kubelet/pods/2ca1763c-6140-42f0-bf1f-0e52db14fbe1/volumes"
Dec 05 15:13:22 crc kubenswrapper[4858]: I1205 15:13:22.899450 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:13:22 crc kubenswrapper[4858]: E1205 15:13:22.900101 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:13:26 crc kubenswrapper[4858]: I1205 15:13:26.688698 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-794c5555d9-m4bnj" podUID="3b098e12-08af-4c9f-8c3c-851b91c2e8a6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9696/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:13:37 crc kubenswrapper[4858]: I1205 15:13:37.899164 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:13:37 crc kubenswrapper[4858]: E1205 15:13:37.899772 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:13:51 crc kubenswrapper[4858]: I1205 15:13:51.905203 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:13:51 crc kubenswrapper[4858]: E1205 15:13:51.905913 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:14:05 crc kubenswrapper[4858]: I1205 15:14:05.900463 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:14:05 crc kubenswrapper[4858]: E1205 15:14:05.901219 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:14:18 crc kubenswrapper[4858]: I1205 15:14:18.900475 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:14:18 crc kubenswrapper[4858]: E1205 15:14:18.903967 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:14:32 crc kubenswrapper[4858]: I1205 15:14:32.899892 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:14:32 crc kubenswrapper[4858]: E1205 15:14:32.901735 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:14:45 crc kubenswrapper[4858]: I1205 15:14:45.899085 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:14:45 crc kubenswrapper[4858]: E1205 15:14:45.899893 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:14:59 crc kubenswrapper[4858]: I1205 15:14:59.900089 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:14:59 crc kubenswrapper[4858]: E1205 15:14:59.900878 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.157947 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz"] Dec 05 15:15:00 crc kubenswrapper[4858]: E1205 15:15:00.158587 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerName="extract-utilities" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.158604 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerName="extract-utilities" Dec 05 15:15:00 crc kubenswrapper[4858]: E1205 15:15:00.158634 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerName="extract-content" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.158640 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerName="extract-content" Dec 05 15:15:00 crc kubenswrapper[4858]: E1205 15:15:00.158650 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerName="registry-server" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.158656 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerName="registry-server" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.158880 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca1763c-6140-42f0-bf1f-0e52db14fbe1" containerName="registry-server" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.160002 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.162429 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.162859 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.202244 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz"] Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.252270 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12eda759-b210-484c-872f-f79d16e87084-config-volume\") pod \"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.252347 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12eda759-b210-484c-872f-f79d16e87084-secret-volume\") pod \"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.252460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gzmr\" (UniqueName: \"kubernetes.io/projected/12eda759-b210-484c-872f-f79d16e87084-kube-api-access-6gzmr\") pod \"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.353651 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12eda759-b210-484c-872f-f79d16e87084-config-volume\") pod \"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.353711 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12eda759-b210-484c-872f-f79d16e87084-secret-volume\") pod \"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.353787 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gzmr\" (UniqueName: \"kubernetes.io/projected/12eda759-b210-484c-872f-f79d16e87084-kube-api-access-6gzmr\") pod \"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.354637 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12eda759-b210-484c-872f-f79d16e87084-config-volume\") pod 
\"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.360053 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12eda759-b210-484c-872f-f79d16e87084-secret-volume\") pod \"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.372999 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gzmr\" (UniqueName: \"kubernetes.io/projected/12eda759-b210-484c-872f-f79d16e87084-kube-api-access-6gzmr\") pod \"collect-profiles-29415795-wxsvz\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:00 crc kubenswrapper[4858]: I1205 15:15:00.480908 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" Dec 05 15:15:01 crc kubenswrapper[4858]: I1205 15:15:01.163258 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz"] Dec 05 15:15:02 crc kubenswrapper[4858]: I1205 15:15:02.056128 4858 generic.go:334] "Generic (PLEG): container finished" podID="12eda759-b210-484c-872f-f79d16e87084" containerID="afb30febab676670c687e46555fd9ef3fca58fc1eb16e33bba1e539f79f82413" exitCode=0 Dec 05 15:15:02 crc kubenswrapper[4858]: I1205 15:15:02.056163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" event={"ID":"12eda759-b210-484c-872f-f79d16e87084","Type":"ContainerDied","Data":"afb30febab676670c687e46555fd9ef3fca58fc1eb16e33bba1e539f79f82413"} Dec 05 15:15:02 crc kubenswrapper[4858]: I1205 15:15:02.056441 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" event={"ID":"12eda759-b210-484c-872f-f79d16e87084","Type":"ContainerStarted","Data":"0f995fa841affcc0532118503c70e4a71276602f8326cd03bc7de8bf70653640"} Dec 05 15:15:03 crc kubenswrapper[4858]: I1205 15:15:03.415661 4858 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:03.512759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12eda759-b210-484c-872f-f79d16e87084-config-volume\") pod \"12eda759-b210-484c-872f-f79d16e87084\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") "
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:03.512951 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12eda759-b210-484c-872f-f79d16e87084-secret-volume\") pod \"12eda759-b210-484c-872f-f79d16e87084\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") "
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:03.513003 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gzmr\" (UniqueName: \"kubernetes.io/projected/12eda759-b210-484c-872f-f79d16e87084-kube-api-access-6gzmr\") pod \"12eda759-b210-484c-872f-f79d16e87084\" (UID: \"12eda759-b210-484c-872f-f79d16e87084\") "
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.200127 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12eda759-b210-484c-872f-f79d16e87084-config-volume" (OuterVolumeSpecName: "config-volume") pod "12eda759-b210-484c-872f-f79d16e87084" (UID: "12eda759-b210-484c-872f-f79d16e87084"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.214866 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12eda759-b210-484c-872f-f79d16e87084-config-volume\") on node \"crc\" DevicePath \"\""
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.252481 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12eda759-b210-484c-872f-f79d16e87084-kube-api-access-6gzmr" (OuterVolumeSpecName: "kube-api-access-6gzmr") pod "12eda759-b210-484c-872f-f79d16e87084" (UID: "12eda759-b210-484c-872f-f79d16e87084"). InnerVolumeSpecName "kube-api-access-6gzmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.272158 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz"
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.304904 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12eda759-b210-484c-872f-f79d16e87084-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "12eda759-b210-484c-872f-f79d16e87084" (UID: "12eda759-b210-484c-872f-f79d16e87084"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.347182 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz" event={"ID":"12eda759-b210-484c-872f-f79d16e87084","Type":"ContainerDied","Data":"0f995fa841affcc0532118503c70e4a71276602f8326cd03bc7de8bf70653640"}
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.347566 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f995fa841affcc0532118503c70e4a71276602f8326cd03bc7de8bf70653640"
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.370069 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12eda759-b210-484c-872f-f79d16e87084-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.370100 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gzmr\" (UniqueName: \"kubernetes.io/projected/12eda759-b210-484c-872f-f79d16e87084-kube-api-access-6gzmr\") on node \"crc\" DevicePath \"\""
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.488584 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv"]
Dec 05 15:15:04 crc kubenswrapper[4858]: I1205 15:15:04.498166 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415750-tgzbv"]
Dec 05 15:15:05 crc kubenswrapper[4858]: I1205 15:15:05.911460 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ca352c-c946-4269-9163-1adaf9364d32" path="/var/lib/kubelet/pods/39ca352c-c946-4269-9163-1adaf9364d32/volumes"
Dec 05 15:15:12 crc kubenswrapper[4858]: I1205 15:15:12.899244 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:15:12 crc kubenswrapper[4858]: E1205 15:15:12.899917 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:15:26 crc kubenswrapper[4858]: I1205 15:15:26.900117 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:15:26 crc kubenswrapper[4858]: E1205 15:15:26.900788 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:15:37 crc kubenswrapper[4858]: I1205 15:15:37.899877 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:15:37 crc kubenswrapper[4858]: E1205 15:15:37.900529 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:15:49 crc kubenswrapper[4858]: I1205 15:15:49.899039 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:15:49 crc kubenswrapper[4858]: E1205 15:15:49.899882 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:16:00 crc kubenswrapper[4858]: I1205 15:16:00.899756 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:16:00 crc kubenswrapper[4858]: E1205 15:16:00.900458 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:16:01 crc kubenswrapper[4858]: I1205 15:16:01.480634 4858 scope.go:117] "RemoveContainer" containerID="63041a3304dc05f4f2d720233cafaf2765acc456c3804cc8db2e07c2bf3911fa"
Dec 05 15:16:11 crc kubenswrapper[4858]: I1205 15:16:11.907194 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:16:11 crc kubenswrapper[4858]: E1205 15:16:11.907964 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:16:19 crc kubenswrapper[4858]: I1205 15:16:19.893777 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-shhm5"]
Dec 05 15:16:19 crc kubenswrapper[4858]: E1205 15:16:19.894906 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eda759-b210-484c-872f-f79d16e87084" containerName="collect-profiles"
Dec 05 15:16:19 crc kubenswrapper[4858]: I1205 15:16:19.894923 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eda759-b210-484c-872f-f79d16e87084" containerName="collect-profiles"
Dec 05 15:16:19 crc kubenswrapper[4858]: I1205 15:16:19.895160 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eda759-b210-484c-872f-f79d16e87084" containerName="collect-profiles"
Dec 05 15:16:19 crc kubenswrapper[4858]: I1205 15:16:19.896970 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:19 crc kubenswrapper[4858]: I1205 15:16:19.911935 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shhm5"]
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.071014 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szc62\" (UniqueName: \"kubernetes.io/projected/269051c8-4fbd-4dc2-848d-cd5a758559cb-kube-api-access-szc62\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.071073 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-utilities\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.071154 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-catalog-content\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.172801 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szc62\" (UniqueName: \"kubernetes.io/projected/269051c8-4fbd-4dc2-848d-cd5a758559cb-kube-api-access-szc62\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.172885 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-utilities\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.172956 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-catalog-content\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.173708 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-catalog-content\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.173815 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-utilities\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.199616 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szc62\" (UniqueName: \"kubernetes.io/projected/269051c8-4fbd-4dc2-848d-cd5a758559cb-kube-api-access-szc62\") pod \"community-operators-shhm5\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.219036 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:20 crc kubenswrapper[4858]: W1205 15:16:20.702153 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269051c8_4fbd_4dc2_848d_cd5a758559cb.slice/crio-bc7cb1eae6bb767310fa7f7ecbe5181557eff7afc9e9d4f0b9af26cada708dd6 WatchSource:0}: Error finding container bc7cb1eae6bb767310fa7f7ecbe5181557eff7afc9e9d4f0b9af26cada708dd6: Status 404 returned error can't find the container with id bc7cb1eae6bb767310fa7f7ecbe5181557eff7afc9e9d4f0b9af26cada708dd6
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.708564 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shhm5"]
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.951095 4858 generic.go:334] "Generic (PLEG): container finished" podID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerID="6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba" exitCode=0
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.951381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shhm5" event={"ID":"269051c8-4fbd-4dc2-848d-cd5a758559cb","Type":"ContainerDied","Data":"6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba"}
Dec 05 15:16:20 crc kubenswrapper[4858]: I1205 15:16:20.951413 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shhm5" event={"ID":"269051c8-4fbd-4dc2-848d-cd5a758559cb","Type":"ContainerStarted","Data":"bc7cb1eae6bb767310fa7f7ecbe5181557eff7afc9e9d4f0b9af26cada708dd6"}
Dec 05 15:16:21 crc kubenswrapper[4858]: I1205 15:16:21.960930 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shhm5" event={"ID":"269051c8-4fbd-4dc2-848d-cd5a758559cb","Type":"ContainerStarted","Data":"672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6"}
Dec 05 15:16:23 crc kubenswrapper[4858]: I1205 15:16:23.058093 4858 generic.go:334] "Generic (PLEG): container finished" podID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerID="672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6" exitCode=0
Dec 05 15:16:23 crc kubenswrapper[4858]: I1205 15:16:23.059008 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shhm5" event={"ID":"269051c8-4fbd-4dc2-848d-cd5a758559cb","Type":"ContainerDied","Data":"672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6"}
Dec 05 15:16:25 crc kubenswrapper[4858]: I1205 15:16:25.093912 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shhm5" event={"ID":"269051c8-4fbd-4dc2-848d-cd5a758559cb","Type":"ContainerStarted","Data":"3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba"}
Dec 05 15:16:25 crc kubenswrapper[4858]: I1205 15:16:25.113575 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-shhm5" podStartSLOduration=3.591945242 podStartE2EDuration="6.113554155s" podCreationTimestamp="2025-12-05 15:16:19 +0000 UTC" firstStartedPulling="2025-12-05 15:16:20.953294416 +0000 UTC m=+4789.500892555" lastFinishedPulling="2025-12-05 15:16:23.474903329 +0000 UTC m=+4792.022501468" observedRunningTime="2025-12-05 15:16:25.109427603 +0000 UTC m=+4793.657025742" watchObservedRunningTime="2025-12-05 15:16:25.113554155 +0000 UTC m=+4793.661152294"
Dec 05 15:16:26 crc kubenswrapper[4858]: I1205 15:16:26.899690 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55"
Dec 05 15:16:26 crc kubenswrapper[4858]: E1205 15:16:26.900227 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:16:30 crc kubenswrapper[4858]: I1205 15:16:30.222095 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:30 crc kubenswrapper[4858]: I1205 15:16:30.222628 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:30 crc kubenswrapper[4858]: I1205 15:16:30.284904 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:31 crc kubenswrapper[4858]: I1205 15:16:31.200266 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-shhm5"
Dec 05 15:16:31 crc kubenswrapper[4858]: I1205 15:16:31.251567 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shhm5"]
Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.186914 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-shhm5" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerName="registry-server" containerID="cri-o://3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba" gracePeriod=2
Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.677239 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-shhm5" Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.749564 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-utilities\") pod \"269051c8-4fbd-4dc2-848d-cd5a758559cb\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.749707 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-catalog-content\") pod \"269051c8-4fbd-4dc2-848d-cd5a758559cb\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.749857 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szc62\" (UniqueName: \"kubernetes.io/projected/269051c8-4fbd-4dc2-848d-cd5a758559cb-kube-api-access-szc62\") pod \"269051c8-4fbd-4dc2-848d-cd5a758559cb\" (UID: \"269051c8-4fbd-4dc2-848d-cd5a758559cb\") " Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.750517 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-utilities" (OuterVolumeSpecName: "utilities") pod "269051c8-4fbd-4dc2-848d-cd5a758559cb" (UID: "269051c8-4fbd-4dc2-848d-cd5a758559cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.758213 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269051c8-4fbd-4dc2-848d-cd5a758559cb-kube-api-access-szc62" (OuterVolumeSpecName: "kube-api-access-szc62") pod "269051c8-4fbd-4dc2-848d-cd5a758559cb" (UID: "269051c8-4fbd-4dc2-848d-cd5a758559cb"). InnerVolumeSpecName "kube-api-access-szc62". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.821795 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "269051c8-4fbd-4dc2-848d-cd5a758559cb" (UID: "269051c8-4fbd-4dc2-848d-cd5a758559cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.852481 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.852515 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269051c8-4fbd-4dc2-848d-cd5a758559cb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:16:33 crc kubenswrapper[4858]: I1205 15:16:33.852529 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szc62\" (UniqueName: \"kubernetes.io/projected/269051c8-4fbd-4dc2-848d-cd5a758559cb-kube-api-access-szc62\") on node \"crc\" DevicePath \"\"" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.198641 4858 generic.go:334] "Generic (PLEG): container finished" podID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerID="3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba" exitCode=0 Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.198684 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shhm5" event={"ID":"269051c8-4fbd-4dc2-848d-cd5a758559cb","Type":"ContainerDied","Data":"3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba"} Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.198719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shhm5" event={"ID":"269051c8-4fbd-4dc2-848d-cd5a758559cb","Type":"ContainerDied","Data":"bc7cb1eae6bb767310fa7f7ecbe5181557eff7afc9e9d4f0b9af26cada708dd6"} Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.198736 4858 scope.go:117] "RemoveContainer" containerID="3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.199616 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-shhm5" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.227914 4858 scope.go:117] "RemoveContainer" containerID="672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.236962 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shhm5"] Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.246095 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-shhm5"] Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.253064 4858 scope.go:117] "RemoveContainer" containerID="6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.299226 4858 scope.go:117] "RemoveContainer" containerID="3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba" Dec 05 15:16:34 crc kubenswrapper[4858]: E1205 15:16:34.300135 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba\": container with ID starting with 3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba not found: ID does not exist" containerID="3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.300167 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba"} err="failed to get container status \"3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba\": rpc error: code = NotFound desc = could not find container \"3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba\": container with ID starting with 3d77bbd59965a4f22f4b3e36ff1960d3e562550288862d9aa1f8509aa46c9dba not found: ID does not exist" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.300194 4858 scope.go:117] "RemoveContainer" containerID="672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6" Dec 05 15:16:34 crc kubenswrapper[4858]: E1205 15:16:34.300447 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6\": container with ID starting with 672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6 not found: ID does not exist" containerID="672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.300470 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6"} err="failed to get container status \"672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6\": rpc error: code = NotFound desc = could not find container \"672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6\": container with ID starting with 672b1b492ccd8e00f54a361d1149aadea7c6a5ebe8b9eb2a8087e2561c8dd9e6 not found: ID does not exist" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.300484 4858 scope.go:117] "RemoveContainer" containerID="6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba" Dec 05 15:16:34 crc kubenswrapper[4858]: E1205 15:16:34.301008 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba\": container with ID starting with 6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba not found: ID does not exist" containerID="6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba" Dec 05 15:16:34 crc kubenswrapper[4858]: I1205 15:16:34.301038 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba"} err="failed to get container status \"6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba\": rpc error: code = NotFound desc = could not find container \"6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba\": container with ID starting with 6d77a2edb8eb01a8029475586abb64ebd7d0a32d2221dc22ca0af8598e7475ba not found: ID does not exist" Dec 05 15:16:35 crc kubenswrapper[4858]: I1205 15:16:35.915751 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" path="/var/lib/kubelet/pods/269051c8-4fbd-4dc2-848d-cd5a758559cb/volumes" Dec 05 15:16:38 crc kubenswrapper[4858]: I1205 15:16:38.899980 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" Dec 05 15:16:38 crc kubenswrapper[4858]: E1205 15:16:38.900584 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.106616 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c8qjv"] Dec 05 15:16:52 crc kubenswrapper[4858]: E1205 15:16:52.107634 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerName="extract-utilities" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.107648 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerName="extract-utilities" Dec 05 15:16:52 crc kubenswrapper[4858]: E1205 15:16:52.107659 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerName="registry-server" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.107666 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerName="registry-server" Dec 05 15:16:52 crc kubenswrapper[4858]: E1205 15:16:52.107677 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerName="extract-content" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.107685 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerName="extract-content" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.107920 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="269051c8-4fbd-4dc2-848d-cd5a758559cb" containerName="registry-server" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.109607 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.127173 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8qjv"] Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.197561 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-utilities\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.197639 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmwm\" (UniqueName: \"kubernetes.io/projected/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-kube-api-access-ljmwm\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.197815 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-catalog-content\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.299381 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-utilities\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.299435 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmwm\" (UniqueName: \"kubernetes.io/projected/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-kube-api-access-ljmwm\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.299470 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-catalog-content\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.299876 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-utilities\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.299955 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-catalog-content\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.321536 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ljmwm\" (UniqueName: \"kubernetes.io/projected/fb6786e6-0316-48d7-8155-61a5ce8e9fcd-kube-api-access-ljmwm\") pod \"redhat-marketplace-c8qjv\" (UID: \"fb6786e6-0316-48d7-8155-61a5ce8e9fcd\") " pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.430846 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:16:52 crc kubenswrapper[4858]: W1205 15:16:52.953889 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb6786e6_0316_48d7_8155_61a5ce8e9fcd.slice/crio-fdd0ca7c375e6d302dd2c67e1609970bb9111a8f4256c0176de04df39b0c53ee WatchSource:0}: Error finding container fdd0ca7c375e6d302dd2c67e1609970bb9111a8f4256c0176de04df39b0c53ee: Status 404 returned error can't find the container with id fdd0ca7c375e6d302dd2c67e1609970bb9111a8f4256c0176de04df39b0c53ee Dec 05 15:16:52 crc kubenswrapper[4858]: I1205 15:16:52.960933 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8qjv"] Dec 05 15:16:53 crc kubenswrapper[4858]: I1205 15:16:53.352148 4858 generic.go:334] "Generic (PLEG): container finished" podID="fb6786e6-0316-48d7-8155-61a5ce8e9fcd" containerID="47d549b1f88c9b5d79bd30b6858f0976ee0748e491939845c4aa3622f55c61cb" exitCode=0 Dec 05 15:16:53 crc kubenswrapper[4858]: I1205 15:16:53.352230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8qjv" event={"ID":"fb6786e6-0316-48d7-8155-61a5ce8e9fcd","Type":"ContainerDied","Data":"47d549b1f88c9b5d79bd30b6858f0976ee0748e491939845c4aa3622f55c61cb"} Dec 05 15:16:53 crc kubenswrapper[4858]: I1205 15:16:53.352457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8qjv" event={"ID":"fb6786e6-0316-48d7-8155-61a5ce8e9fcd","Type":"ContainerStarted","Data":"fdd0ca7c375e6d302dd2c67e1609970bb9111a8f4256c0176de04df39b0c53ee"} Dec 05 15:16:53 crc kubenswrapper[4858]: I1205 15:16:53.899689 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" Dec 05 15:16:53 crc kubenswrapper[4858]: E1205 15:16:53.900320 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:16:57 crc kubenswrapper[4858]: I1205 15:16:57.411915 4858 generic.go:334] "Generic (PLEG): container finished" podID="fb6786e6-0316-48d7-8155-61a5ce8e9fcd" containerID="1b57aa914e18d73ed1279a62a4f75470f80b0c099bf1797f3d9c843cedae83d7" exitCode=0 Dec 05 15:16:57 crc kubenswrapper[4858]: I1205 15:16:57.411964 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8qjv" event={"ID":"fb6786e6-0316-48d7-8155-61a5ce8e9fcd","Type":"ContainerDied","Data":"1b57aa914e18d73ed1279a62a4f75470f80b0c099bf1797f3d9c843cedae83d7"} Dec 05 15:16:58 crc kubenswrapper[4858]: I1205 15:16:58.433838 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c8qjv" 
event={"ID":"fb6786e6-0316-48d7-8155-61a5ce8e9fcd","Type":"ContainerStarted","Data":"6e4429f690b2e2e3bc82afab4134453688a36a6a22aa596d61d8bca016191b7b"} Dec 05 15:16:58 crc kubenswrapper[4858]: I1205 15:16:58.466680 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c8qjv" podStartSLOduration=1.942618993 podStartE2EDuration="6.466655391s" podCreationTimestamp="2025-12-05 15:16:52 +0000 UTC" firstStartedPulling="2025-12-05 15:16:53.354172791 +0000 UTC m=+4821.901770930" lastFinishedPulling="2025-12-05 15:16:57.878209189 +0000 UTC m=+4826.425807328" observedRunningTime="2025-12-05 15:16:58.456309451 +0000 UTC m=+4827.003907590" watchObservedRunningTime="2025-12-05 15:16:58.466655391 +0000 UTC m=+4827.014253550" Dec 05 15:17:02 crc kubenswrapper[4858]: I1205 15:17:02.431927 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:17:02 crc kubenswrapper[4858]: I1205 15:17:02.432589 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:17:02 crc kubenswrapper[4858]: I1205 15:17:02.487796 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:17:04 crc kubenswrapper[4858]: I1205 15:17:04.901131 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" Dec 05 15:17:04 crc kubenswrapper[4858]: E1205 15:17:04.901904 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:17:12 crc kubenswrapper[4858]: I1205 15:17:12.930149 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c8qjv" Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.009899 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c8qjv"] Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.067315 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fbw6"] Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.067609 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9fbw6" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="registry-server" containerID="cri-o://6b99a21c2482afc4af0fd96ee3497b0d85234becac72fe662c6b4438a4519361" gracePeriod=2 Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.576342 4858 generic.go:334] "Generic (PLEG): container finished" podID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerID="6b99a21c2482afc4af0fd96ee3497b0d85234becac72fe662c6b4438a4519361" exitCode=0 Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.576616 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fbw6" event={"ID":"9bdceab9-085a-485f-87c3-54a30f6a4b01","Type":"ContainerDied","Data":"6b99a21c2482afc4af0fd96ee3497b0d85234becac72fe662c6b4438a4519361"} Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 
15:17:13.576773 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fbw6" event={"ID":"9bdceab9-085a-485f-87c3-54a30f6a4b01","Type":"ContainerDied","Data":"2604d4c6fa53056e60353186a148349ccd51acb992f73241128be6260cd175f2"} Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.576806 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2604d4c6fa53056e60353186a148349ccd51acb992f73241128be6260cd175f2" Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.631466 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.733846 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8gls\" (UniqueName: \"kubernetes.io/projected/9bdceab9-085a-485f-87c3-54a30f6a4b01-kube-api-access-w8gls\") pod \"9bdceab9-085a-485f-87c3-54a30f6a4b01\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.734051 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-catalog-content\") pod \"9bdceab9-085a-485f-87c3-54a30f6a4b01\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.734099 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-utilities\") pod \"9bdceab9-085a-485f-87c3-54a30f6a4b01\" (UID: \"9bdceab9-085a-485f-87c3-54a30f6a4b01\") " Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.735047 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-utilities" (OuterVolumeSpecName: "utilities") pod "9bdceab9-085a-485f-87c3-54a30f6a4b01" (UID: "9bdceab9-085a-485f-87c3-54a30f6a4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.745350 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bdceab9-085a-485f-87c3-54a30f6a4b01-kube-api-access-w8gls" (OuterVolumeSpecName: "kube-api-access-w8gls") pod "9bdceab9-085a-485f-87c3-54a30f6a4b01" (UID: "9bdceab9-085a-485f-87c3-54a30f6a4b01"). InnerVolumeSpecName "kube-api-access-w8gls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.762758 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bdceab9-085a-485f-87c3-54a30f6a4b01" (UID: "9bdceab9-085a-485f-87c3-54a30f6a4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.836424 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.836463 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bdceab9-085a-485f-87c3-54a30f6a4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:17:13 crc kubenswrapper[4858]: I1205 15:17:13.836475 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8gls\" (UniqueName: \"kubernetes.io/projected/9bdceab9-085a-485f-87c3-54a30f6a4b01-kube-api-access-w8gls\") on node \"crc\" DevicePath \"\"" Dec 05 15:17:14 crc kubenswrapper[4858]: I1205 15:17:14.584126 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fbw6" Dec 05 15:17:14 crc kubenswrapper[4858]: I1205 15:17:14.609728 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fbw6"] Dec 05 15:17:14 crc kubenswrapper[4858]: I1205 15:17:14.620501 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fbw6"] Dec 05 15:17:15 crc kubenswrapper[4858]: I1205 15:17:15.918162 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" path="/var/lib/kubelet/pods/9bdceab9-085a-485f-87c3-54a30f6a4b01/volumes" Dec 05 15:17:18 crc kubenswrapper[4858]: I1205 15:17:18.899210 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" Dec 05 15:17:18 crc kubenswrapper[4858]: E1205 15:17:18.899883 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:17:30 crc kubenswrapper[4858]: I1205 15:17:30.899119 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" Dec 05 15:17:30 crc kubenswrapper[4858]: E1205 15:17:30.899697 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:17:41 crc kubenswrapper[4858]: I1205 15:17:41.909507 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" Dec 05 15:17:41 crc kubenswrapper[4858]: E1205 15:17:41.910777 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:17:52 crc kubenswrapper[4858]: I1205 15:17:52.899514 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" Dec 05 15:17:53 crc kubenswrapper[4858]: I1205 15:17:53.952264 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"92e5ef05dd0de7fb2dce9cc425ea42b1dc3ec66c63fcbaaed5a40dab2f5e7ff8"} Dec 05 15:18:01 crc kubenswrapper[4858]: I1205 15:18:01.595621 4858 scope.go:117] "RemoveContainer" containerID="6b99a21c2482afc4af0fd96ee3497b0d85234becac72fe662c6b4438a4519361" Dec 05 15:18:01 crc kubenswrapper[4858]: I1205 15:18:01.630385 4858 scope.go:117] "RemoveContainer" containerID="5dba2e12b8ac13b7d672024ea501cbe184933891e15526e62424d7dae1e57d03" Dec 05 15:18:01 crc kubenswrapper[4858]: I1205 15:18:01.651290 4858 scope.go:117] "RemoveContainer" containerID="7d7aae8fbc2a9de891e3870491a51a452261f8c865568b15f03d0e60774d0206" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.322877 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wqn7s"] Dec 05 15:18:06 crc kubenswrapper[4858]: E1205 15:18:06.325211 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="registry-server" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.325235 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="registry-server" Dec 05 15:18:06 crc kubenswrapper[4858]: E1205 15:18:06.325255 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="extract-utilities" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.325266 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="extract-utilities" Dec 05 15:18:06 crc kubenswrapper[4858]: E1205 15:18:06.325279 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="extract-content" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.325286 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="extract-content" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.325608 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bdceab9-085a-485f-87c3-54a30f6a4b01" containerName="registry-server" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.329723 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.336032 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-catalog-content\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.336107 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2lf\" (UniqueName: \"kubernetes.io/projected/65124f21-7b24-4318-ab71-7a0a3e1761f8-kube-api-access-vm2lf\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.336230 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-utilities\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.341396 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqn7s"] Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.437431 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2lf\" (UniqueName: \"kubernetes.io/projected/65124f21-7b24-4318-ab71-7a0a3e1761f8-kube-api-access-vm2lf\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.437545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-utilities\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.437616 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-catalog-content\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.438084 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-catalog-content\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.438134 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-utilities\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.458780 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vm2lf\" (UniqueName: \"kubernetes.io/projected/65124f21-7b24-4318-ab71-7a0a3e1761f8-kube-api-access-vm2lf\") pod \"redhat-operators-wqn7s\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:06 crc kubenswrapper[4858]: I1205 15:18:06.668768 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:07 crc kubenswrapper[4858]: I1205 15:18:07.174096 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wqn7s"] Dec 05 15:18:08 crc kubenswrapper[4858]: I1205 15:18:08.069950 4858 generic.go:334] "Generic (PLEG): container finished" podID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerID="7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231" exitCode=0 Dec 05 15:18:08 crc kubenswrapper[4858]: I1205 15:18:08.070160 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqn7s" event={"ID":"65124f21-7b24-4318-ab71-7a0a3e1761f8","Type":"ContainerDied","Data":"7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231"} Dec 05 15:18:08 crc kubenswrapper[4858]: I1205 15:18:08.071654 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqn7s" event={"ID":"65124f21-7b24-4318-ab71-7a0a3e1761f8","Type":"ContainerStarted","Data":"7805d0cc680f773d70819de936893217e84e11a8f0f99c6f56260ec0bf1e2c5a"} Dec 05 15:18:08 crc kubenswrapper[4858]: I1205 15:18:08.074010 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 15:18:09 crc kubenswrapper[4858]: I1205 15:18:09.089423 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqn7s" event={"ID":"65124f21-7b24-4318-ab71-7a0a3e1761f8","Type":"ContainerStarted","Data":"bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393"} Dec 05 15:18:14 crc kubenswrapper[4858]: I1205 15:18:14.140433 4858 generic.go:334] "Generic (PLEG): container finished" podID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerID="bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393" exitCode=0 Dec 05 15:18:14 crc kubenswrapper[4858]: I1205 15:18:14.140519 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqn7s" event={"ID":"65124f21-7b24-4318-ab71-7a0a3e1761f8","Type":"ContainerDied","Data":"bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393"} Dec 05 15:18:15 crc kubenswrapper[4858]: I1205 15:18:15.168871 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqn7s" event={"ID":"65124f21-7b24-4318-ab71-7a0a3e1761f8","Type":"ContainerStarted","Data":"7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620"} Dec 05 15:18:15 crc kubenswrapper[4858]: I1205 15:18:15.202482 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wqn7s" podStartSLOduration=2.6280410339999998 podStartE2EDuration="9.202454517s" podCreationTimestamp="2025-12-05 15:18:06 +0000 UTC" firstStartedPulling="2025-12-05 15:18:08.07312234 +0000 UTC m=+4896.620720489" lastFinishedPulling="2025-12-05 15:18:14.647535833 +0000 UTC m=+4903.195133972" observedRunningTime="2025-12-05 15:18:15.198170681 +0000 UTC m=+4903.745768820" watchObservedRunningTime="2025-12-05 15:18:15.202454517 +0000 UTC m=+4903.750052656" Dec 05 15:18:16 crc 
kubenswrapper[4858]: I1205 15:18:16.669670 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:16 crc kubenswrapper[4858]: I1205 15:18:16.669721 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:17 crc kubenswrapper[4858]: I1205 15:18:17.716440 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wqn7s" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="registry-server" probeResult="failure" output=< Dec 05 15:18:17 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:18:17 crc kubenswrapper[4858]: > Dec 05 15:18:26 crc kubenswrapper[4858]: I1205 15:18:26.718366 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:26 crc kubenswrapper[4858]: I1205 15:18:26.770643 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:26 crc kubenswrapper[4858]: I1205 15:18:26.961757 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wqn7s"] Dec 05 15:18:28 crc kubenswrapper[4858]: I1205 15:18:28.277565 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wqn7s" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="registry-server" containerID="cri-o://7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620" gracePeriod=2 Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.063201 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.190687 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-utilities\") pod \"65124f21-7b24-4318-ab71-7a0a3e1761f8\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.190796 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-catalog-content\") pod \"65124f21-7b24-4318-ab71-7a0a3e1761f8\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.190837 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm2lf\" (UniqueName: \"kubernetes.io/projected/65124f21-7b24-4318-ab71-7a0a3e1761f8-kube-api-access-vm2lf\") pod \"65124f21-7b24-4318-ab71-7a0a3e1761f8\" (UID: \"65124f21-7b24-4318-ab71-7a0a3e1761f8\") " Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.191614 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-utilities" (OuterVolumeSpecName: "utilities") pod "65124f21-7b24-4318-ab71-7a0a3e1761f8" (UID: "65124f21-7b24-4318-ab71-7a0a3e1761f8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.192690 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.288711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65124f21-7b24-4318-ab71-7a0a3e1761f8" (UID: "65124f21-7b24-4318-ab71-7a0a3e1761f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.291318 4858 generic.go:334] "Generic (PLEG): container finished" podID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerID="7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620" exitCode=0 Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.291367 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqn7s" event={"ID":"65124f21-7b24-4318-ab71-7a0a3e1761f8","Type":"ContainerDied","Data":"7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620"} Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.291396 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wqn7s" event={"ID":"65124f21-7b24-4318-ab71-7a0a3e1761f8","Type":"ContainerDied","Data":"7805d0cc680f773d70819de936893217e84e11a8f0f99c6f56260ec0bf1e2c5a"} Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.291413 4858 scope.go:117] "RemoveContainer" containerID="7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.291533 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wqn7s" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.294492 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65124f21-7b24-4318-ab71-7a0a3e1761f8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.325437 4858 scope.go:117] "RemoveContainer" containerID="bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.695199 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65124f21-7b24-4318-ab71-7a0a3e1761f8-kube-api-access-vm2lf" (OuterVolumeSpecName: "kube-api-access-vm2lf") pod "65124f21-7b24-4318-ab71-7a0a3e1761f8" (UID: "65124f21-7b24-4318-ab71-7a0a3e1761f8"). InnerVolumeSpecName "kube-api-access-vm2lf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.701782 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm2lf\" (UniqueName: \"kubernetes.io/projected/65124f21-7b24-4318-ab71-7a0a3e1761f8-kube-api-access-vm2lf\") on node \"crc\" DevicePath \"\"" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.742372 4858 scope.go:117] "RemoveContainer" containerID="7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.822734 4858 scope.go:117] "RemoveContainer" containerID="7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620" Dec 05 15:18:29 crc kubenswrapper[4858]: E1205 15:18:29.823260 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620\": container with ID starting with 7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620 not found: ID does not exist" containerID="7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.823311 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620"} err="failed to get container status \"7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620\": rpc error: code = NotFound desc = could not find container \"7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620\": container with ID starting with 7162fc2c6585fffe70f43295e98268e734a9aa43a007ba1808ab0aa97b251620 not found: ID does not exist" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.823342 4858 scope.go:117] "RemoveContainer" containerID="bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393" Dec 05 15:18:29 crc kubenswrapper[4858]: E1205 15:18:29.823590 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393\": container with ID starting with bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393 not found: ID does not exist" containerID="bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.823617 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393"} err="failed to get container status \"bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393\": rpc error: code = NotFound desc = could not find container \"bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393\": container with ID starting with bcf60a9e2de52d7acb199d27eb10aa2366ac9f2a176730c6b67168d6f32e7393 not found: ID does not exist" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.823634 4858 scope.go:117] "RemoveContainer" containerID="7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231" Dec 05 15:18:29 crc kubenswrapper[4858]: E1205 15:18:29.823921 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231\": container with ID starting with 7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231 not found: ID does not 
exist" containerID="7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.823949 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231"} err="failed to get container status \"7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231\": rpc error: code = NotFound desc = could not find container \"7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231\": container with ID starting with 7a01d90739c253bcbcb0d5b18d6c6fe4777a776ef6a0729719febf32ffcbb231 not found: ID does not exist" Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.930833 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wqn7s"] Dec 05 15:18:29 crc kubenswrapper[4858]: I1205 15:18:29.940880 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wqn7s"] Dec 05 15:18:31 crc kubenswrapper[4858]: I1205 15:18:31.949095 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" path="/var/lib/kubelet/pods/65124f21-7b24-4318-ab71-7a0a3e1761f8/volumes" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.421485 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.429135 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.424777 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-l27jv" podUID="521a1948-1758-4148-be85-f3d91f04aac9" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.42:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.495306 4858 patch_prober.go:28] interesting pod/console-85b6884698-jg67f container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.495369 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-85b6884698-jg67f" podUID="edd4d801-d89a-48f7-a598-9011f83ceefd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.32:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.500723 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="34c521aa-4339-4571-9168-f2939e083ea5" containerName="kube-state-metrics" probeResult="failure" output="Get 
\"https://10.217.0.203:8080/livez\": context deadline exceeded" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.522321 4858 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-h4k5m container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.522555 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h4k5m" podUID="db8cbc4d-eadf-4949-9b00-760f67bd0442" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.526406 4858 patch_prober.go:28] interesting pod/nmstate-webhook-5f6d4c5ccb-mz5j7 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.27:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.526608 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-mz5j7" podUID="4b3d39ce-7f49-470b-af52-6895f872f60d" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.27:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.387463 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="34c521aa-4339-4571-9168-f2939e083ea5" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.203:8081/readyz\": context deadline exceeded" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.544015 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-5655c58dd6-5mx92" podUID="cb76164b-d338-4395-af71-e6dd098c165f" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.554677 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" podUID="a181bba4-2682-4d6a-90cc-12bea5e07d34" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.555649 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rbddp" podUID="5401bf83-09b5-464f-b52c-210a3fa92aa1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.444052 4858 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:19:13 crc kubenswrapper[4858]: E1205 15:19:13.566429 4858 controller.go:195] "Failed to update lease" err="etcdserver: request timed out" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.574857 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-hvgl6" podUID="aa187928-b3b8-40e6-b60b-19d84781e34c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.574940 4858 patch_prober.go:28] interesting pod/dns-default-5c95q container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.36:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.574982 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-5c95q" podUID="95eba5b0-94bb-4594-a49e-ca21538ef39d" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.36:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.575388 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.576125 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.576155 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hh2rc" podUID="a181bba4-2682-4d6a-90cc-12bea5e07d34" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.576858 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: E1205 15:19:13.584036 4858 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": the object has been modified; please apply your changes to the latest version and try again" Dec 05 15:19:13 crc kubenswrapper[4858]: I1205 15:19:13.621355 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-756vt" podUID="9a3a124e-0ac1-4f2a-aee6-3cae0fd66576" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Dec 05 15:19:13 crc kubenswrapper[4858]: E1205 15:19:13.658569 4858 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.759s" Dec 05 15:20:13 crc kubenswrapper[4858]: E1205 15:20:13.161119 4858 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.174:52022->38.102.83.174:41641: write tcp 38.102.83.174:52022->38.102.83.174:41641: write: broken pipe Dec 05 15:20:14 crc kubenswrapper[4858]: I1205 15:20:14.759778 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:20:14 crc kubenswrapper[4858]: I1205 15:20:14.760263 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:20:44 crc kubenswrapper[4858]: I1205 15:20:44.760342 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:20:44 crc kubenswrapper[4858]: I1205 15:20:44.760869 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:21:14 crc kubenswrapper[4858]: I1205 15:21:14.760009 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:21:14 crc kubenswrapper[4858]: I1205 15:21:14.760483 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:21:14 crc kubenswrapper[4858]: I1205 15:21:14.760530 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 15:21:14 crc kubenswrapper[4858]: I1205 15:21:14.761336 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92e5ef05dd0de7fb2dce9cc425ea42b1dc3ec66c63fcbaaed5a40dab2f5e7ff8"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 15:21:14 crc kubenswrapper[4858]: I1205 15:21:14.761389 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://92e5ef05dd0de7fb2dce9cc425ea42b1dc3ec66c63fcbaaed5a40dab2f5e7ff8" gracePeriod=600 Dec 05 15:21:14 crc kubenswrapper[4858]: I1205 15:21:14.994298 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="92e5ef05dd0de7fb2dce9cc425ea42b1dc3ec66c63fcbaaed5a40dab2f5e7ff8" exitCode=0 Dec 05 15:21:14 crc kubenswrapper[4858]: I1205 15:21:14.994375 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"92e5ef05dd0de7fb2dce9cc425ea42b1dc3ec66c63fcbaaed5a40dab2f5e7ff8"} Dec 05 15:21:14 crc kubenswrapper[4858]: I1205 15:21:14.994629 4858 scope.go:117] "RemoveContainer" containerID="0b3153ee1dc2d8b5928e06a0386a98814dd8922e37455d7fbdd53059c9fe1b55" Dec 05 15:21:16 crc kubenswrapper[4858]: I1205 15:21:16.016199 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00"} Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.445039 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-57v2x"] Dec 05 15:23:28 crc kubenswrapper[4858]: E1205 15:23:28.445997 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="registry-server" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.446012 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="registry-server" Dec 05 15:23:28 crc kubenswrapper[4858]: E1205 15:23:28.446042 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="extract-utilities" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.446048 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="extract-utilities" Dec 05 15:23:28 crc kubenswrapper[4858]: E1205 15:23:28.446071 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="extract-content" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.446078 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="extract-content" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.446315 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="65124f21-7b24-4318-ab71-7a0a3e1761f8" containerName="registry-server" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.449490 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.481894 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57v2x"] Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.583473 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-utilities\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.584743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287md\" (UniqueName: \"kubernetes.io/projected/a5a7f2af-3c04-4448-8718-1623934b248b-kube-api-access-287md\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.585089 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-catalog-content\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.687551 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-utilities\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.687708 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-287md\" (UniqueName: \"kubernetes.io/projected/a5a7f2af-3c04-4448-8718-1623934b248b-kube-api-access-287md\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.687815 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-catalog-content\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.688336 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-catalog-content\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.688605 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-utilities\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.710623 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-287md\" (UniqueName: \"kubernetes.io/projected/a5a7f2af-3c04-4448-8718-1623934b248b-kube-api-access-287md\") pod \"certified-operators-57v2x\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:28 crc kubenswrapper[4858]: I1205 15:23:28.768639 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:29 crc kubenswrapper[4858]: I1205 15:23:29.325868 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57v2x"] Dec 05 15:23:30 crc kubenswrapper[4858]: I1205 15:23:30.184944 4858 generic.go:334] "Generic (PLEG): container finished" podID="a5a7f2af-3c04-4448-8718-1623934b248b" containerID="2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b" exitCode=0 Dec 05 15:23:30 crc kubenswrapper[4858]: I1205 15:23:30.185217 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57v2x" event={"ID":"a5a7f2af-3c04-4448-8718-1623934b248b","Type":"ContainerDied","Data":"2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b"} Dec 05 15:23:30 crc kubenswrapper[4858]: I1205 15:23:30.185263 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57v2x" event={"ID":"a5a7f2af-3c04-4448-8718-1623934b248b","Type":"ContainerStarted","Data":"a00ae3ffb35e29b98e494788e4eac263053c53472843f1999e99c85a8fc26af3"} Dec 05 15:23:30 crc kubenswrapper[4858]: I1205 15:23:30.194723 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 15:23:31 crc kubenswrapper[4858]: I1205 15:23:31.196967 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57v2x" event={"ID":"a5a7f2af-3c04-4448-8718-1623934b248b","Type":"ContainerStarted","Data":"5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558"} Dec 05 15:23:33 crc kubenswrapper[4858]: I1205 15:23:33.216430 4858 generic.go:334] "Generic (PLEG): container finished" podID="a5a7f2af-3c04-4448-8718-1623934b248b" containerID="5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558" exitCode=0 Dec 05 15:23:33 crc kubenswrapper[4858]: I1205 15:23:33.216524 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57v2x" event={"ID":"a5a7f2af-3c04-4448-8718-1623934b248b","Type":"ContainerDied","Data":"5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558"} Dec 05 15:23:34 crc kubenswrapper[4858]: I1205 15:23:34.227272 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57v2x" event={"ID":"a5a7f2af-3c04-4448-8718-1623934b248b","Type":"ContainerStarted","Data":"27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942"} Dec 05 15:23:38 crc kubenswrapper[4858]: I1205 15:23:38.769088 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:38 crc kubenswrapper[4858]: I1205 15:23:38.770497 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:38 crc kubenswrapper[4858]: I1205 15:23:38.825671 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:38 crc 
kubenswrapper[4858]: I1205 15:23:38.854105 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-57v2x" podStartSLOduration=7.276358179 podStartE2EDuration="10.854083045s" podCreationTimestamp="2025-12-05 15:23:28 +0000 UTC" firstStartedPulling="2025-12-05 15:23:30.194298576 +0000 UTC m=+5218.741896715" lastFinishedPulling="2025-12-05 15:23:33.772023442 +0000 UTC m=+5222.319621581" observedRunningTime="2025-12-05 15:23:34.25135755 +0000 UTC m=+5222.798955699" watchObservedRunningTime="2025-12-05 15:23:38.854083045 +0000 UTC m=+5227.401681184" Dec 05 15:23:39 crc kubenswrapper[4858]: I1205 15:23:39.354453 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:39 crc kubenswrapper[4858]: I1205 15:23:39.455454 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57v2x"] Dec 05 15:23:41 crc kubenswrapper[4858]: I1205 15:23:41.304969 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-57v2x" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" containerName="registry-server" containerID="cri-o://27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942" gracePeriod=2 Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.028623 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.077682 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-utilities\") pod \"a5a7f2af-3c04-4448-8718-1623934b248b\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.077847 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-287md\" (UniqueName: \"kubernetes.io/projected/a5a7f2af-3c04-4448-8718-1623934b248b-kube-api-access-287md\") pod \"a5a7f2af-3c04-4448-8718-1623934b248b\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.077891 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-catalog-content\") pod \"a5a7f2af-3c04-4448-8718-1623934b248b\" (UID: \"a5a7f2af-3c04-4448-8718-1623934b248b\") " Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.078625 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-utilities" (OuterVolumeSpecName: "utilities") pod "a5a7f2af-3c04-4448-8718-1623934b248b" (UID: "a5a7f2af-3c04-4448-8718-1623934b248b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.089096 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a7f2af-3c04-4448-8718-1623934b248b-kube-api-access-287md" (OuterVolumeSpecName: "kube-api-access-287md") pod "a5a7f2af-3c04-4448-8718-1623934b248b" (UID: "a5a7f2af-3c04-4448-8718-1623934b248b"). InnerVolumeSpecName "kube-api-access-287md". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.136469 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5a7f2af-3c04-4448-8718-1623934b248b" (UID: "a5a7f2af-3c04-4448-8718-1623934b248b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.180909 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.180952 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-287md\" (UniqueName: \"kubernetes.io/projected/a5a7f2af-3c04-4448-8718-1623934b248b-kube-api-access-287md\") on node \"crc\" DevicePath \"\"" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.180964 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5a7f2af-3c04-4448-8718-1623934b248b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.314149 4858 generic.go:334] "Generic (PLEG): container finished" podID="a5a7f2af-3c04-4448-8718-1623934b248b" containerID="27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942" exitCode=0 Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.314188 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57v2x" event={"ID":"a5a7f2af-3c04-4448-8718-1623934b248b","Type":"ContainerDied","Data":"27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942"} Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.314213 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57v2x" event={"ID":"a5a7f2af-3c04-4448-8718-1623934b248b","Type":"ContainerDied","Data":"a00ae3ffb35e29b98e494788e4eac263053c53472843f1999e99c85a8fc26af3"} Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.314232 4858 scope.go:117] "RemoveContainer" containerID="27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.314365 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-57v2x" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.341511 4858 scope.go:117] "RemoveContainer" containerID="5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.358479 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57v2x"] Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.367691 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-57v2x"] Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.370463 4858 scope.go:117] "RemoveContainer" containerID="2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.411318 4858 scope.go:117] "RemoveContainer" containerID="27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942" Dec 05 15:23:42 crc kubenswrapper[4858]: E1205 15:23:42.411661 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942\": container with ID starting with 27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942 not found: ID does not exist" containerID="27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.411692 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942"} err="failed to get container status \"27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942\": rpc error: code = NotFound desc = could not find container \"27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942\": container with ID starting with 27e42443c5971da913dfc8e062c70130cbd7c4653d0a0d847bf4fd6cb4610942 not found: ID does not exist" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.411716 4858 scope.go:117] "RemoveContainer" containerID="5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558" Dec 05 15:23:42 crc kubenswrapper[4858]: E1205 15:23:42.412070 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558\": container with ID starting with 5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558 not found: ID does not exist" containerID="5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.412093 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558"} err="failed to get container status \"5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558\": rpc error: code = NotFound desc = could not find container \"5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558\": container with ID starting with 5d768ebde962f24b108a5d503807e5a602142ce0a3ce9121eec5fa241a78c558 not found: ID does not exist" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.412106 4858 scope.go:117] "RemoveContainer" containerID="2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b" Dec 05 15:23:42 crc kubenswrapper[4858]: E1205 15:23:42.412718 4858 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b\": container with ID starting with 2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b not found: ID does not exist" containerID="2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b" Dec 05 15:23:42 crc kubenswrapper[4858]: I1205 15:23:42.412768 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b"} err="failed to get container status \"2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b\": rpc error: code = NotFound desc = could not find container \"2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b\": container with ID starting with 2e60fedf21e7a778798eccce1c68dd0036829ba11d8b2ee6eaa59856a94ec53b not found: ID does not exist" Dec 05 15:23:43 crc kubenswrapper[4858]: I1205 15:23:43.909895 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" path="/var/lib/kubelet/pods/a5a7f2af-3c04-4448-8718-1623934b248b/volumes" Dec 05 15:23:44 crc kubenswrapper[4858]: I1205 15:23:44.759900 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:23:44 crc kubenswrapper[4858]: I1205 15:23:44.759973 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:24:14 crc kubenswrapper[4858]: I1205 15:24:14.765553 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:24:14 crc kubenswrapper[4858]: I1205 15:24:14.766203 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:24:44 crc kubenswrapper[4858]: I1205 15:24:44.760273 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:24:44 crc kubenswrapper[4858]: I1205 15:24:44.760725 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:24:44 crc kubenswrapper[4858]: I1205 15:24:44.760773 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 15:24:44 crc kubenswrapper[4858]: I1205 15:24:44.761517 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 15:24:44 crc kubenswrapper[4858]: I1205 15:24:44.761561 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" gracePeriod=600 Dec 05 15:24:44 crc kubenswrapper[4858]: E1205 15:24:44.887465 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:24:45 crc kubenswrapper[4858]: I1205 15:24:45.845235 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" exitCode=0 Dec 05 15:24:45 crc kubenswrapper[4858]: I1205 15:24:45.845282 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00"} Dec 05 15:24:45 crc kubenswrapper[4858]: I1205 15:24:45.845313 4858 scope.go:117] "RemoveContainer" containerID="92e5ef05dd0de7fb2dce9cc425ea42b1dc3ec66c63fcbaaed5a40dab2f5e7ff8" Dec 05 15:24:45 crc kubenswrapper[4858]: I1205 15:24:45.846037 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:24:45 crc kubenswrapper[4858]: E1205 15:24:45.846327 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:24:59 crc kubenswrapper[4858]: I1205 15:24:59.899612 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:24:59 crc kubenswrapper[4858]: E1205 15:24:59.901419 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:25:12 crc 
kubenswrapper[4858]: I1205 15:25:12.899359 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:25:12 crc kubenswrapper[4858]: E1205 15:25:12.900162 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:25:24 crc kubenswrapper[4858]: I1205 15:25:24.899677 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:25:24 crc kubenswrapper[4858]: E1205 15:25:24.900380 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:25:37 crc kubenswrapper[4858]: I1205 15:25:37.899463 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:25:37 crc kubenswrapper[4858]: E1205 15:25:37.900457 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:25:51 crc kubenswrapper[4858]: I1205 15:25:51.905624 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:25:51 crc kubenswrapper[4858]: E1205 15:25:51.907249 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:26:02 crc kubenswrapper[4858]: I1205 15:26:02.899772 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:26:02 crc kubenswrapper[4858]: E1205 15:26:02.900425 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:26:17 crc kubenswrapper[4858]: I1205 15:26:17.900300 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:26:17 crc 
kubenswrapper[4858]: E1205 15:26:17.901719 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:26:30 crc kubenswrapper[4858]: I1205 15:26:30.899273 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:26:30 crc kubenswrapper[4858]: E1205 15:26:30.900287 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:26:43 crc kubenswrapper[4858]: I1205 15:26:43.900025 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:26:43 crc kubenswrapper[4858]: E1205 15:26:43.900686 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:26:56 crc kubenswrapper[4858]: I1205 15:26:56.898933 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:26:56 crc kubenswrapper[4858]: E1205 15:26:56.899691 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.119554 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6lltz"] Dec 05 15:26:58 crc kubenswrapper[4858]: E1205 15:26:58.120031 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" containerName="registry-server" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.120050 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" containerName="registry-server" Dec 05 15:26:58 crc kubenswrapper[4858]: E1205 15:26:58.120072 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" containerName="extract-utilities" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.120078 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" containerName="extract-utilities" Dec 05 15:26:58 crc kubenswrapper[4858]: E1205 15:26:58.120097 4858 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" containerName="extract-content" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.120103 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" containerName="extract-content" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.120274 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5a7f2af-3c04-4448-8718-1623934b248b" containerName="registry-server" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.121995 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.143349 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6lltz"] Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.247899 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-utilities\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.247991 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-catalog-content\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.248021 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrd95\" (UniqueName: \"kubernetes.io/projected/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-kube-api-access-mrd95\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.349967 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-catalog-content\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.350023 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrd95\" (UniqueName: \"kubernetes.io/projected/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-kube-api-access-mrd95\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.350153 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-utilities\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.350411 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-catalog-content\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.350533 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-utilities\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.369643 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrd95\" (UniqueName: \"kubernetes.io/projected/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-kube-api-access-mrd95\") pod \"community-operators-6lltz\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:58 crc kubenswrapper[4858]: I1205 15:26:58.445337 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:26:59 crc kubenswrapper[4858]: I1205 15:26:59.177570 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6lltz"] Dec 05 15:26:59 crc kubenswrapper[4858]: W1205 15:26:59.189384 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae97ff5d_0703_43fb_8a64_3fbdea7d09f6.slice/crio-97b3f6db91200feec334f39a38d1d2506f3ad0f31923a3ab8dfbe20d7178cef2 WatchSource:0}: Error finding container 97b3f6db91200feec334f39a38d1d2506f3ad0f31923a3ab8dfbe20d7178cef2: Status 404 returned error can't find the container with id 97b3f6db91200feec334f39a38d1d2506f3ad0f31923a3ab8dfbe20d7178cef2 Dec 05 15:27:00 crc kubenswrapper[4858]: I1205 15:27:00.036992 4858 generic.go:334] "Generic (PLEG): container finished" podID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerID="c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9" exitCode=0 Dec 05 15:27:00 crc kubenswrapper[4858]: I1205 15:27:00.037061 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lltz" event={"ID":"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6","Type":"ContainerDied","Data":"c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9"} Dec 05 15:27:00 crc kubenswrapper[4858]: I1205 15:27:00.037269 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lltz" event={"ID":"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6","Type":"ContainerStarted","Data":"97b3f6db91200feec334f39a38d1d2506f3ad0f31923a3ab8dfbe20d7178cef2"} Dec 05 15:27:02 crc kubenswrapper[4858]: I1205 15:27:02.054557 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lltz" event={"ID":"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6","Type":"ContainerStarted","Data":"64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05"} Dec 05 15:27:04 crc kubenswrapper[4858]: I1205 15:27:04.073430 4858 generic.go:334] "Generic (PLEG): container finished" podID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerID="64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05" exitCode=0 Dec 05 15:27:04 crc kubenswrapper[4858]: I1205 15:27:04.073500 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6lltz" event={"ID":"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6","Type":"ContainerDied","Data":"64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05"} Dec 05 15:27:05 crc kubenswrapper[4858]: I1205 15:27:05.296636 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lltz" event={"ID":"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6","Type":"ContainerStarted","Data":"9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7"} Dec 05 15:27:07 crc kubenswrapper[4858]: I1205 15:27:07.898983 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:27:07 crc kubenswrapper[4858]: E1205 15:27:07.899638 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:27:08 crc kubenswrapper[4858]: I1205 15:27:08.446212 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:27:08 crc kubenswrapper[4858]: I1205 15:27:08.446265 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:27:09 crc kubenswrapper[4858]: I1205 15:27:09.532088 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6lltz" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="registry-server" probeResult="failure" output=< Dec 05 15:27:09 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:27:09 crc kubenswrapper[4858]: > Dec 05 15:27:18 crc kubenswrapper[4858]: I1205 15:27:18.530861 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:27:18 crc kubenswrapper[4858]: I1205 15:27:18.555683 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6lltz" podStartSLOduration=16.090740321 podStartE2EDuration="20.555663348s" podCreationTimestamp="2025-12-05 15:26:58 +0000 UTC" firstStartedPulling="2025-12-05 15:27:00.039047838 +0000 UTC m=+5428.586645977" lastFinishedPulling="2025-12-05 15:27:04.503970865 +0000 UTC m=+5433.051569004" observedRunningTime="2025-12-05 15:27:05.338214013 +0000 UTC m=+5433.885812152" watchObservedRunningTime="2025-12-05 15:27:18.555663348 +0000 UTC m=+5447.103261487" Dec 05 15:27:18 crc kubenswrapper[4858]: I1205 15:27:18.596994 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:27:18 crc kubenswrapper[4858]: I1205 15:27:18.766120 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6lltz"] Dec 05 15:27:20 crc kubenswrapper[4858]: I1205 15:27:20.413767 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6lltz" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="registry-server" containerID="cri-o://9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7" 
gracePeriod=2 Dec 05 15:27:20 crc kubenswrapper[4858]: I1205 15:27:20.898770 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:27:20 crc kubenswrapper[4858]: E1205 15:27:20.899381 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:27:20 crc kubenswrapper[4858]: I1205 15:27:20.989972 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.079460 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-catalog-content\") pod \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.079790 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrd95\" (UniqueName: \"kubernetes.io/projected/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-kube-api-access-mrd95\") pod \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.079960 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-utilities\") pod \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\" (UID: \"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6\") " Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.080585 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-utilities" (OuterVolumeSpecName: "utilities") pod "ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" (UID: "ae97ff5d-0703-43fb-8a64-3fbdea7d09f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.101038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-kube-api-access-mrd95" (OuterVolumeSpecName: "kube-api-access-mrd95") pod "ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" (UID: "ae97ff5d-0703-43fb-8a64-3fbdea7d09f6"). InnerVolumeSpecName "kube-api-access-mrd95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.135711 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" (UID: "ae97ff5d-0703-43fb-8a64-3fbdea7d09f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.182548 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.182801 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrd95\" (UniqueName: \"kubernetes.io/projected/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-kube-api-access-mrd95\") on node \"crc\" DevicePath \"\"" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.182905 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.435284 4858 generic.go:334] "Generic (PLEG): container finished" podID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerID="9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7" exitCode=0 Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.435382 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lltz" event={"ID":"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6","Type":"ContainerDied","Data":"9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7"} Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.435392 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6lltz" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.436180 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6lltz" event={"ID":"ae97ff5d-0703-43fb-8a64-3fbdea7d09f6","Type":"ContainerDied","Data":"97b3f6db91200feec334f39a38d1d2506f3ad0f31923a3ab8dfbe20d7178cef2"} Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.436262 4858 scope.go:117] "RemoveContainer" containerID="9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.480878 4858 scope.go:117] "RemoveContainer" containerID="64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.481298 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6lltz"] Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.494318 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6lltz"] Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.524937 4858 scope.go:117] "RemoveContainer" containerID="c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.573376 4858 scope.go:117] "RemoveContainer" containerID="9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7" Dec 05 15:27:21 crc kubenswrapper[4858]: E1205 15:27:21.573853 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7\": container with ID starting with 9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7 not found: ID does not exist" containerID="9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.573881 
4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7"} err="failed to get container status \"9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7\": rpc error: code = NotFound desc = could not find container \"9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7\": container with ID starting with 9de5f0330392d040ebd0996ef1cc3e1a63ebd36c3f7b9207e9e932045c0e1dd7 not found: ID does not exist" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.573901 4858 scope.go:117] "RemoveContainer" containerID="64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05" Dec 05 15:27:21 crc kubenswrapper[4858]: E1205 15:27:21.574192 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05\": container with ID starting with 64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05 not found: ID does not exist" containerID="64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.574208 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05"} err="failed to get container status \"64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05\": rpc error: code = NotFound desc = could not find container \"64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05\": container with ID starting with 64b4be3510068f097ce800e4ec0f2b076e55fc7ea50c9f80ae5613d2da26dc05 not found: ID does not exist" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.574220 4858 scope.go:117] "RemoveContainer" containerID="c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9" Dec 05 15:27:21 crc kubenswrapper[4858]: E1205 15:27:21.574560 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9\": container with ID starting with c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9 not found: ID does not exist" containerID="c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.574651 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9"} err="failed to get container status \"c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9\": rpc error: code = NotFound desc = could not find container \"c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9\": container with ID starting with c2c79ddd551939c187aa3286fe6b21b4226a5969d61e4648662f0be85bac4be9 not found: ID does not exist" Dec 05 15:27:21 crc kubenswrapper[4858]: I1205 15:27:21.910260 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" path="/var/lib/kubelet/pods/ae97ff5d-0703-43fb-8a64-3fbdea7d09f6/volumes" Dec 05 15:27:34 crc kubenswrapper[4858]: I1205 15:27:34.900461 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:27:34 crc kubenswrapper[4858]: E1205 15:27:34.901344 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:27:45 crc kubenswrapper[4858]: I1205 15:27:45.899205 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:27:45 crc kubenswrapper[4858]: E1205 15:27:45.899960 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:27:56 crc kubenswrapper[4858]: I1205 15:27:56.899575 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:27:56 crc kubenswrapper[4858]: E1205 15:27:56.900209 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:28:07 crc kubenswrapper[4858]: I1205 15:28:07.899980 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:28:07 crc kubenswrapper[4858]: E1205 15:28:07.900682 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:28:18 crc kubenswrapper[4858]: I1205 15:28:18.899957 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:28:18 crc kubenswrapper[4858]: E1205 15:28:18.900766 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:28:32 crc kubenswrapper[4858]: I1205 15:28:32.900001 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:28:32 crc kubenswrapper[4858]: E1205 15:28:32.900953 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:28:44 crc kubenswrapper[4858]: I1205 15:28:44.899505 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:28:44 crc kubenswrapper[4858]: E1205 15:28:44.900402 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:28:58 crc kubenswrapper[4858]: I1205 15:28:58.900077 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:28:58 crc kubenswrapper[4858]: E1205 15:28:58.900778 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.305392 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k4qn4"] Dec 05 15:29:00 crc kubenswrapper[4858]: E1205 15:29:00.306463 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="extract-content" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.306477 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="extract-content" Dec 05 15:29:00 crc kubenswrapper[4858]: E1205 15:29:00.306486 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="extract-utilities" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.306492 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="extract-utilities" Dec 05 15:29:00 crc kubenswrapper[4858]: E1205 15:29:00.306504 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="registry-server" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.306510 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="registry-server" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.306681 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae97ff5d-0703-43fb-8a64-3fbdea7d09f6" containerName="registry-server" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.308063 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.325349 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4qn4"] Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.363388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-catalog-content\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.363508 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9vmt\" (UniqueName: \"kubernetes.io/projected/177b99c7-d5b2-494d-a932-9ba7a9acdfec-kube-api-access-k9vmt\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.363562 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-utilities\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.465687 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-catalog-content\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.465780 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9vmt\" (UniqueName: \"kubernetes.io/projected/177b99c7-d5b2-494d-a932-9ba7a9acdfec-kube-api-access-k9vmt\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.465810 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-utilities\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.466262 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-catalog-content\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.466276 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-utilities\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.486687 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k9vmt\" (UniqueName: \"kubernetes.io/projected/177b99c7-d5b2-494d-a932-9ba7a9acdfec-kube-api-access-k9vmt\") pod \"redhat-marketplace-k4qn4\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:00 crc kubenswrapper[4858]: I1205 15:29:00.633341 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.131669 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4qn4"] Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.340625 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4qn4" event={"ID":"177b99c7-d5b2-494d-a932-9ba7a9acdfec","Type":"ContainerStarted","Data":"f285087a430b664ffdaf20494a20e17dd33f623ecd2b7fd081af24ac19373d82"} Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.708064 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cw5mr"] Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.712614 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.740206 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cw5mr"] Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.790500 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-utilities\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.790806 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-catalog-content\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.791000 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg5zq\" (UniqueName: \"kubernetes.io/projected/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-kube-api-access-hg5zq\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.893201 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-utilities\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.893610 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-catalog-content\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 
15:29:01.893658 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg5zq\" (UniqueName: \"kubernetes.io/projected/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-kube-api-access-hg5zq\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.893765 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-utilities\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.894039 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-catalog-content\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:01 crc kubenswrapper[4858]: I1205 15:29:01.922709 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg5zq\" (UniqueName: \"kubernetes.io/projected/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-kube-api-access-hg5zq\") pod \"redhat-operators-cw5mr\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:02 crc kubenswrapper[4858]: I1205 15:29:02.035519 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:03 crc kubenswrapper[4858]: I1205 15:29:03.264881 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cw5mr"] Dec 05 15:29:03 crc kubenswrapper[4858]: I1205 15:29:03.363063 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4qn4" event={"ID":"177b99c7-d5b2-494d-a932-9ba7a9acdfec","Type":"ContainerStarted","Data":"20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52"} Dec 05 15:29:03 crc kubenswrapper[4858]: I1205 15:29:03.364465 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cw5mr" event={"ID":"54ecb1d5-64b4-47e3-97ac-251afd7e51d8","Type":"ContainerStarted","Data":"c30bd0a0a4c5de1f7f68f149b5bef44fa1d5199ba95185afc313423fb7f1c4b8"} Dec 05 15:29:04 crc kubenswrapper[4858]: I1205 15:29:04.400305 4858 generic.go:334] "Generic (PLEG): container finished" podID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerID="20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52" exitCode=0 Dec 05 15:29:04 crc kubenswrapper[4858]: I1205 15:29:04.400412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4qn4" event={"ID":"177b99c7-d5b2-494d-a932-9ba7a9acdfec","Type":"ContainerDied","Data":"20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52"} Dec 05 15:29:04 crc kubenswrapper[4858]: I1205 15:29:04.403489 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 15:29:04 crc kubenswrapper[4858]: I1205 15:29:04.404534 4858 generic.go:334] "Generic (PLEG): container finished" podID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerID="178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73" exitCode=0 Dec 05 15:29:04 crc kubenswrapper[4858]: I1205 
15:29:04.404570 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cw5mr" event={"ID":"54ecb1d5-64b4-47e3-97ac-251afd7e51d8","Type":"ContainerDied","Data":"178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73"} Dec 05 15:29:09 crc kubenswrapper[4858]: I1205 15:29:09.453595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cw5mr" event={"ID":"54ecb1d5-64b4-47e3-97ac-251afd7e51d8","Type":"ContainerStarted","Data":"6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802"} Dec 05 15:29:09 crc kubenswrapper[4858]: I1205 15:29:09.459313 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4qn4" event={"ID":"177b99c7-d5b2-494d-a932-9ba7a9acdfec","Type":"ContainerStarted","Data":"6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2"} Dec 05 15:29:11 crc kubenswrapper[4858]: I1205 15:29:11.482557 4858 generic.go:334] "Generic (PLEG): container finished" podID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerID="6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2" exitCode=0 Dec 05 15:29:11 crc kubenswrapper[4858]: I1205 15:29:11.482782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4qn4" event={"ID":"177b99c7-d5b2-494d-a932-9ba7a9acdfec","Type":"ContainerDied","Data":"6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2"} Dec 05 15:29:11 crc kubenswrapper[4858]: I1205 15:29:11.908559 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:29:11 crc kubenswrapper[4858]: E1205 15:29:11.909588 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:29:13 crc kubenswrapper[4858]: I1205 15:29:13.507230 4858 generic.go:334] "Generic (PLEG): container finished" podID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerID="6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802" exitCode=0 Dec 05 15:29:13 crc kubenswrapper[4858]: I1205 15:29:13.507328 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cw5mr" event={"ID":"54ecb1d5-64b4-47e3-97ac-251afd7e51d8","Type":"ContainerDied","Data":"6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802"} Dec 05 15:29:14 crc kubenswrapper[4858]: I1205 15:29:14.519869 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4qn4" event={"ID":"177b99c7-d5b2-494d-a932-9ba7a9acdfec","Type":"ContainerStarted","Data":"98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517"} Dec 05 15:29:14 crc kubenswrapper[4858]: I1205 15:29:14.527583 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cw5mr" event={"ID":"54ecb1d5-64b4-47e3-97ac-251afd7e51d8","Type":"ContainerStarted","Data":"4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df"} Dec 05 15:29:14 crc kubenswrapper[4858]: I1205 15:29:14.545700 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-k4qn4" podStartSLOduration=5.250425057 podStartE2EDuration="14.545681757s" podCreationTimestamp="2025-12-05 15:29:00 +0000 UTC" firstStartedPulling="2025-12-05 15:29:04.403008542 +0000 UTC m=+5552.950606681" lastFinishedPulling="2025-12-05 15:29:13.698265242 +0000 UTC m=+5562.245863381" observedRunningTime="2025-12-05 15:29:14.540065085 +0000 UTC m=+5563.087663244" watchObservedRunningTime="2025-12-05 15:29:14.545681757 +0000 UTC m=+5563.093279896" Dec 05 15:29:14 crc kubenswrapper[4858]: I1205 15:29:14.568616 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cw5mr" podStartSLOduration=3.745364023 podStartE2EDuration="13.568599896s" podCreationTimestamp="2025-12-05 15:29:01 +0000 UTC" firstStartedPulling="2025-12-05 15:29:04.406083465 +0000 UTC m=+5552.953681604" lastFinishedPulling="2025-12-05 15:29:14.229319338 +0000 UTC m=+5562.776917477" observedRunningTime="2025-12-05 15:29:14.56836056 +0000 UTC m=+5563.115958719" watchObservedRunningTime="2025-12-05 15:29:14.568599896 +0000 UTC m=+5563.116198035" Dec 05 15:29:20 crc kubenswrapper[4858]: I1205 15:29:20.633547 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:20 crc kubenswrapper[4858]: I1205 15:29:20.634327 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:20 crc kubenswrapper[4858]: I1205 15:29:20.695420 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:21 crc kubenswrapper[4858]: I1205 15:29:21.633315 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:21 crc kubenswrapper[4858]: I1205 15:29:21.686194 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4qn4"] Dec 05 15:29:22 crc kubenswrapper[4858]: I1205 15:29:22.035908 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:22 crc kubenswrapper[4858]: I1205 15:29:22.036048 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:23 crc kubenswrapper[4858]: I1205 15:29:23.081449 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cw5mr" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="registry-server" probeResult="failure" output=< Dec 05 15:29:23 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:29:23 crc kubenswrapper[4858]: > Dec 05 15:29:23 crc kubenswrapper[4858]: I1205 15:29:23.601922 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k4qn4" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerName="registry-server" containerID="cri-o://98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517" gracePeriod=2 Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.165587 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.262727 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-catalog-content\") pod \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.263019 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9vmt\" (UniqueName: \"kubernetes.io/projected/177b99c7-d5b2-494d-a932-9ba7a9acdfec-kube-api-access-k9vmt\") pod \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.263354 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-utilities\") pod \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\" (UID: \"177b99c7-d5b2-494d-a932-9ba7a9acdfec\") " Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.264566 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-utilities" (OuterVolumeSpecName: "utilities") pod "177b99c7-d5b2-494d-a932-9ba7a9acdfec" (UID: "177b99c7-d5b2-494d-a932-9ba7a9acdfec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.269130 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/177b99c7-d5b2-494d-a932-9ba7a9acdfec-kube-api-access-k9vmt" (OuterVolumeSpecName: "kube-api-access-k9vmt") pod "177b99c7-d5b2-494d-a932-9ba7a9acdfec" (UID: "177b99c7-d5b2-494d-a932-9ba7a9acdfec"). InnerVolumeSpecName "kube-api-access-k9vmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.288726 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "177b99c7-d5b2-494d-a932-9ba7a9acdfec" (UID: "177b99c7-d5b2-494d-a932-9ba7a9acdfec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.365991 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.366025 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/177b99c7-d5b2-494d-a932-9ba7a9acdfec-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.366041 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9vmt\" (UniqueName: \"kubernetes.io/projected/177b99c7-d5b2-494d-a932-9ba7a9acdfec-kube-api-access-k9vmt\") on node \"crc\" DevicePath \"\"" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.614256 4858 generic.go:334] "Generic (PLEG): container finished" podID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerID="98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517" exitCode=0 Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.614314 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4qn4" event={"ID":"177b99c7-d5b2-494d-a932-9ba7a9acdfec","Type":"ContainerDied","Data":"98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517"} Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.614324 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k4qn4" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.614340 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k4qn4" event={"ID":"177b99c7-d5b2-494d-a932-9ba7a9acdfec","Type":"ContainerDied","Data":"f285087a430b664ffdaf20494a20e17dd33f623ecd2b7fd081af24ac19373d82"} Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.614357 4858 scope.go:117] "RemoveContainer" containerID="98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.650072 4858 scope.go:117] "RemoveContainer" containerID="6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.657881 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4qn4"] Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.668287 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k4qn4"] Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.685609 4858 scope.go:117] "RemoveContainer" containerID="20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.718956 4858 scope.go:117] "RemoveContainer" containerID="98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517" Dec 05 15:29:24 crc kubenswrapper[4858]: E1205 15:29:24.719459 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517\": container with ID starting with 98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517 not found: ID does not exist" containerID="98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.719490 4858 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517"} err="failed to get container status \"98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517\": rpc error: code = NotFound desc = could not find container \"98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517\": container with ID starting with 98fb081982ce3c272860f53817068abbffd104cdf80bd00ae9736f52b3e65517 not found: ID does not exist" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.719512 4858 scope.go:117] "RemoveContainer" containerID="6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2" Dec 05 15:29:24 crc kubenswrapper[4858]: E1205 15:29:24.719841 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2\": container with ID starting with 6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2 not found: ID does not exist" containerID="6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.719885 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2"} err="failed to get container status \"6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2\": rpc error: code = NotFound desc = could not find container \"6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2\": container with ID starting with 6bbef2848254540ffe70ac8e796212ff863479266a9363510f0ac5b34fb71eb2 not found: ID does not exist" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.719920 4858 scope.go:117] "RemoveContainer" containerID="20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52" Dec 05 15:29:24 crc kubenswrapper[4858]: E1205 15:29:24.720348 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52\": container with ID starting with 20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52 not found: ID does not exist" containerID="20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52" Dec 05 15:29:24 crc kubenswrapper[4858]: I1205 15:29:24.720372 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52"} err="failed to get container status \"20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52\": rpc error: code = NotFound desc = could not find container \"20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52\": container with ID starting with 20620776ff6aa789923c0f69b92e51ffa10f5cee0fe31be56ffa02620714aa52 not found: ID does not exist" Dec 05 15:29:25 crc kubenswrapper[4858]: I1205 15:29:25.901420 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:29:25 crc kubenswrapper[4858]: E1205 15:29:25.902000 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:29:25 crc kubenswrapper[4858]: I1205 15:29:25.924730 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" path="/var/lib/kubelet/pods/177b99c7-d5b2-494d-a932-9ba7a9acdfec/volumes" Dec 05 15:29:32 crc kubenswrapper[4858]: I1205 15:29:32.080870 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:32 crc kubenswrapper[4858]: I1205 15:29:32.130350 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:32 crc kubenswrapper[4858]: I1205 15:29:32.511283 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cw5mr"] Dec 05 15:29:33 crc kubenswrapper[4858]: I1205 15:29:33.712721 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cw5mr" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="registry-server" containerID="cri-o://4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df" gracePeriod=2 Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.194979 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.373759 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg5zq\" (UniqueName: \"kubernetes.io/projected/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-kube-api-access-hg5zq\") pod \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.374094 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-utilities\") pod \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.374115 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-catalog-content\") pod \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\" (UID: \"54ecb1d5-64b4-47e3-97ac-251afd7e51d8\") " Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.374554 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-utilities" (OuterVolumeSpecName: "utilities") pod "54ecb1d5-64b4-47e3-97ac-251afd7e51d8" (UID: "54ecb1d5-64b4-47e3-97ac-251afd7e51d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.378839 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-kube-api-access-hg5zq" (OuterVolumeSpecName: "kube-api-access-hg5zq") pod "54ecb1d5-64b4-47e3-97ac-251afd7e51d8" (UID: "54ecb1d5-64b4-47e3-97ac-251afd7e51d8"). InnerVolumeSpecName "kube-api-access-hg5zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.477049 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.477081 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg5zq\" (UniqueName: \"kubernetes.io/projected/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-kube-api-access-hg5zq\") on node \"crc\" DevicePath \"\"" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.533557 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54ecb1d5-64b4-47e3-97ac-251afd7e51d8" (UID: "54ecb1d5-64b4-47e3-97ac-251afd7e51d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.577805 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54ecb1d5-64b4-47e3-97ac-251afd7e51d8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.723026 4858 generic.go:334] "Generic (PLEG): container finished" podID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerID="4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df" exitCode=0 Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.723065 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cw5mr" event={"ID":"54ecb1d5-64b4-47e3-97ac-251afd7e51d8","Type":"ContainerDied","Data":"4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df"} Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.723093 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cw5mr" event={"ID":"54ecb1d5-64b4-47e3-97ac-251afd7e51d8","Type":"ContainerDied","Data":"c30bd0a0a4c5de1f7f68f149b5bef44fa1d5199ba95185afc313423fb7f1c4b8"} Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.723109 4858 scope.go:117] "RemoveContainer" containerID="4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.723103 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cw5mr" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.741721 4858 scope.go:117] "RemoveContainer" containerID="6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.770365 4858 scope.go:117] "RemoveContainer" containerID="178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.774509 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cw5mr"] Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.783314 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cw5mr"] Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.806097 4858 scope.go:117] "RemoveContainer" containerID="4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df" Dec 05 15:29:34 crc kubenswrapper[4858]: E1205 15:29:34.806843 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df\": container with ID starting with 4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df not found: ID does not exist" containerID="4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.806898 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df"} err="failed to get container status \"4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df\": rpc error: code = NotFound desc = could not find container \"4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df\": container with ID starting with 4af70dfa071981f061b9cac1595e2f2918db2247854896b0e7be74ab2df5b2df not found: ID does not exist" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.806934 4858 scope.go:117] "RemoveContainer" containerID="6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802" Dec 05 15:29:34 crc kubenswrapper[4858]: E1205 15:29:34.807291 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802\": container with ID starting with 6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802 not found: ID does not exist" containerID="6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.807336 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802"} err="failed to get container status \"6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802\": rpc error: code = NotFound desc = could not find container \"6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802\": container with ID starting with 6c38cb2a088bb3de8cf25c20d601b443a8f0afe3a20594e3835fb2fa63709802 not found: ID does not exist" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.807370 4858 scope.go:117] "RemoveContainer" containerID="178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73" Dec 05 15:29:34 crc kubenswrapper[4858]: E1205 15:29:34.807909 4858 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73\": container with ID starting with 178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73 not found: ID does not exist" containerID="178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73" Dec 05 15:29:34 crc kubenswrapper[4858]: I1205 15:29:34.807931 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73"} err="failed to get container status \"178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73\": rpc error: code = NotFound desc = could not find container \"178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73\": container with ID starting with 178b715623d31f6fbb7a9f987ecbed71ed0b14224f2b8c8d280dd49bb1031b73 not found: ID does not exist" Dec 05 15:29:35 crc kubenswrapper[4858]: I1205 15:29:35.912094 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" path="/var/lib/kubelet/pods/54ecb1d5-64b4-47e3-97ac-251afd7e51d8/volumes" Dec 05 15:29:39 crc kubenswrapper[4858]: I1205 15:29:39.899565 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:29:39 crc kubenswrapper[4858]: E1205 15:29:39.900352 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:29:52 crc kubenswrapper[4858]: I1205 15:29:52.899755 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00" Dec 05 15:29:55 crc kubenswrapper[4858]: I1205 15:29:55.055750 4858 trace.go:236] Trace[502336470]: "Calculate volume metrics of trusted-ca-bundle for pod openshift-console/console-85b6884698-jg67f" (05-Dec-2025 15:29:53.876) (total time: 1178ms): Dec 05 15:29:55 crc kubenswrapper[4858]: Trace[502336470]: [1.178122922s] [1.178122922s] END Dec 05 15:29:56 crc kubenswrapper[4858]: I1205 15:29:56.981985 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"869f92cbb584b0423ed653719716bd235fabec973239a518db2d7b48502cae4d"} Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.319570 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"] Dec 05 15:30:00 crc kubenswrapper[4858]: E1205 15:30:00.320544 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="extract-content" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.320563 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="extract-content" Dec 05 15:30:00 crc kubenswrapper[4858]: E1205 15:30:00.320583 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="extract-utilities" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 
15:30:00.320591 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="extract-utilities" Dec 05 15:30:00 crc kubenswrapper[4858]: E1205 15:30:00.320610 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerName="extract-utilities" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.320621 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerName="extract-utilities" Dec 05 15:30:00 crc kubenswrapper[4858]: E1205 15:30:00.320632 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="registry-server" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.320638 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="registry-server" Dec 05 15:30:00 crc kubenswrapper[4858]: E1205 15:30:00.320654 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerName="extract-content" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.320661 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerName="extract-content" Dec 05 15:30:00 crc kubenswrapper[4858]: E1205 15:30:00.320673 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerName="registry-server" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.320679 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerName="registry-server" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.321106 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="177b99c7-d5b2-494d-a932-9ba7a9acdfec" containerName="registry-server" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.321141 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ecb1d5-64b4-47e3-97ac-251afd7e51d8" containerName="registry-server" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.321917 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.329602 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.329605 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.330638 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"] Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.447727 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf248\" (UniqueName: \"kubernetes.io/projected/ecd76b24-71a2-414a-8e3c-0a8bc7305386-kube-api-access-cf248\") pod \"collect-profiles-29415810-4p7cp\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.448072 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecd76b24-71a2-414a-8e3c-0a8bc7305386-secret-volume\") pod \"collect-profiles-29415810-4p7cp\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.448245 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecd76b24-71a2-414a-8e3c-0a8bc7305386-config-volume\") pod \"collect-profiles-29415810-4p7cp\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.549876 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf248\" (UniqueName: \"kubernetes.io/projected/ecd76b24-71a2-414a-8e3c-0a8bc7305386-kube-api-access-cf248\") pod \"collect-profiles-29415810-4p7cp\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.549961 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecd76b24-71a2-414a-8e3c-0a8bc7305386-secret-volume\") pod \"collect-profiles-29415810-4p7cp\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.550105 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecd76b24-71a2-414a-8e3c-0a8bc7305386-config-volume\") pod \"collect-profiles-29415810-4p7cp\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.551092 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecd76b24-71a2-414a-8e3c-0a8bc7305386-config-volume\") pod 
Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.556502 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecd76b24-71a2-414a-8e3c-0a8bc7305386-secret-volume\") pod \"collect-profiles-29415810-4p7cp\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"
Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.586680 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf248\" (UniqueName: \"kubernetes.io/projected/ecd76b24-71a2-414a-8e3c-0a8bc7305386-kube-api-access-cf248\") pod \"collect-profiles-29415810-4p7cp\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"
Dec 05 15:30:00 crc kubenswrapper[4858]: I1205 15:30:00.645276 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"
Dec 05 15:30:01 crc kubenswrapper[4858]: I1205 15:30:01.394538 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"]
Dec 05 15:30:02 crc kubenswrapper[4858]: I1205 15:30:02.052599 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" event={"ID":"ecd76b24-71a2-414a-8e3c-0a8bc7305386","Type":"ContainerStarted","Data":"447d537ca6f23c8007a0dcde6ed2034e393f282c561d6cc515b78ca292e53063"}
Dec 05 15:30:02 crc kubenswrapper[4858]: I1205 15:30:02.052642 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" event={"ID":"ecd76b24-71a2-414a-8e3c-0a8bc7305386","Type":"ContainerStarted","Data":"0bebfc565f6caa731cd772b17396b1227d0c9dfe0ee768d5d23d422c1913e2ad"}
Dec 05 15:30:02 crc kubenswrapper[4858]: I1205 15:30:02.093023 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" podStartSLOduration=2.092986069 podStartE2EDuration="2.092986069s" podCreationTimestamp="2025-12-05 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 15:30:02.080570543 +0000 UTC m=+5610.628168682" watchObservedRunningTime="2025-12-05 15:30:02.092986069 +0000 UTC m=+5610.640584208"
Dec 05 15:30:03 crc kubenswrapper[4858]: I1205 15:30:03.080068 4858 generic.go:334] "Generic (PLEG): container finished" podID="ecd76b24-71a2-414a-8e3c-0a8bc7305386" containerID="447d537ca6f23c8007a0dcde6ed2034e393f282c561d6cc515b78ca292e53063" exitCode=0
Dec 05 15:30:03 crc kubenswrapper[4858]: I1205 15:30:03.080155 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" event={"ID":"ecd76b24-71a2-414a-8e3c-0a8bc7305386","Type":"ContainerDied","Data":"447d537ca6f23c8007a0dcde6ed2034e393f282c561d6cc515b78ca292e53063"}
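The pod_startup_latency_tracker line reports podStartSLOduration equal to podStartE2EDuration (about 2.09s) because no image pull happened: firstStartedPulling/lastFinishedPulling are the Go zero time (0001-01-01). The duration is simply watchObservedRunningTime minus podCreationTimestamp, which is easy to verify:

    // Quick check of the arithmetic in the latency-tracker line above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339Nano, "2025-12-05T15:30:00Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-12-05T15:30:02.092986069Z")
        fmt.Println(running.Sub(created)) // 2.092986069s, matching podStartE2EDuration
    }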
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.453023 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.459093 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf248\" (UniqueName: \"kubernetes.io/projected/ecd76b24-71a2-414a-8e3c-0a8bc7305386-kube-api-access-cf248\") pod \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") "
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.459136 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecd76b24-71a2-414a-8e3c-0a8bc7305386-config-volume\") pod \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") "
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.459163 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecd76b24-71a2-414a-8e3c-0a8bc7305386-secret-volume\") pod \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\" (UID: \"ecd76b24-71a2-414a-8e3c-0a8bc7305386\") "
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.459943 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd76b24-71a2-414a-8e3c-0a8bc7305386-config-volume" (OuterVolumeSpecName: "config-volume") pod "ecd76b24-71a2-414a-8e3c-0a8bc7305386" (UID: "ecd76b24-71a2-414a-8e3c-0a8bc7305386"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.460443 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecd76b24-71a2-414a-8e3c-0a8bc7305386-config-volume\") on node \"crc\" DevicePath \"\""
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.466449 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd76b24-71a2-414a-8e3c-0a8bc7305386-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ecd76b24-71a2-414a-8e3c-0a8bc7305386" (UID: "ecd76b24-71a2-414a-8e3c-0a8bc7305386"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.467362 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd76b24-71a2-414a-8e3c-0a8bc7305386-kube-api-access-cf248" (OuterVolumeSpecName: "kube-api-access-cf248") pod "ecd76b24-71a2-414a-8e3c-0a8bc7305386" (UID: "ecd76b24-71a2-414a-8e3c-0a8bc7305386"). InnerVolumeSpecName "kube-api-access-cf248". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.597306 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf248\" (UniqueName: \"kubernetes.io/projected/ecd76b24-71a2-414a-8e3c-0a8bc7305386-kube-api-access-cf248\") on node \"crc\" DevicePath \"\""
Dec 05 15:30:04 crc kubenswrapper[4858]: I1205 15:30:04.597344 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecd76b24-71a2-414a-8e3c-0a8bc7305386-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 05 15:30:05 crc kubenswrapper[4858]: I1205 15:30:05.099578 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp" event={"ID":"ecd76b24-71a2-414a-8e3c-0a8bc7305386","Type":"ContainerDied","Data":"0bebfc565f6caa731cd772b17396b1227d0c9dfe0ee768d5d23d422c1913e2ad"}
Dec 05 15:30:05 crc kubenswrapper[4858]: I1205 15:30:05.099634 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"
Dec 05 15:30:05 crc kubenswrapper[4858]: I1205 15:30:05.099613 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bebfc565f6caa731cd772b17396b1227d0c9dfe0ee768d5d23d422c1913e2ad"
Dec 05 15:30:05 crc kubenswrapper[4858]: I1205 15:30:05.538087 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"]
Dec 05 15:30:05 crc kubenswrapper[4858]: I1205 15:30:05.546697 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415765-s54kp"]
Dec 05 15:30:05 crc kubenswrapper[4858]: I1205 15:30:05.918861 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe05d25-a105-41a6-9443-eee7578072c4" path="/var/lib/kubelet/pods/afe05d25-a105-41a6-9443-eee7578072c4/volumes"
Dec 05 15:31:02 crc kubenswrapper[4858]: I1205 15:31:02.119363 4858 scope.go:117] "RemoveContainer" containerID="0c2137015b02687e1160d93a6dce359fcf707af437d4cc5bc28b8d0f8df676dc"
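The SyncLoop DELETE/REMOVE of collect-profiles-29415765-s54kp just after the 29415810 run finished is consistent with the CronJob controller pruning old jobs under its history limit. The numeric suffix on these names is the scheduled run time in minutes since the Unix epoch (the CronJob controller's Job-naming scheme), so the two runs are exactly 45 minutes apart; time.Unix takes seconds, hence the *60:

    // Decoding the collect-profiles job-name suffixes.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        fmt.Println(time.Unix(29415810*60, 0).UTC()) // 2025-12-05 15:30:00 +0000 UTC
        fmt.Println(time.Unix(29415765*60, 0).UTC()) // 2025-12-05 14:45:00 +0000 UTC
    }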
Dec 05 15:32:14 crc kubenswrapper[4858]: I1205 15:32:14.762398 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:32:14 crc kubenswrapper[4858]: I1205 15:32:14.763766 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:32:44 crc kubenswrapper[4858]: I1205 15:32:44.760078 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:32:44 crc kubenswrapper[4858]: I1205 15:32:44.760623 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:33:14 crc kubenswrapper[4858]: I1205 15:33:14.759729 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:33:14 crc kubenswrapper[4858]: I1205 15:33:14.760631 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:33:14 crc kubenswrapper[4858]: I1205 15:33:14.760711 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn"
Dec 05 15:33:14 crc kubenswrapper[4858]: I1205 15:33:14.762167 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"869f92cbb584b0423ed653719716bd235fabec973239a518db2d7b48502cae4d"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 15:33:14 crc kubenswrapper[4858]: I1205 15:33:14.762240 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://869f92cbb584b0423ed653719716bd235fabec973239a518db2d7b48502cae4d" gracePeriod=600
Dec 05 15:33:14 crc kubenswrapper[4858]: E1205 15:33:14.931058 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ab8742a_625e_4bb8_9329_31f39a34fe48.slice/crio-869f92cbb584b0423ed653719716bd235fabec973239a518db2d7b48502cae4d.scope\": RecentStats: unable to find data in memory cache]"
Dec 05 15:33:15 crc kubenswrapper[4858]: I1205 15:33:15.783403 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="869f92cbb584b0423ed653719716bd235fabec973239a518db2d7b48502cae4d" exitCode=0
Dec 05 15:33:15 crc kubenswrapper[4858]: I1205 15:33:15.783474 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"869f92cbb584b0423ed653719716bd235fabec973239a518db2d7b48502cae4d"}
Dec 05 15:33:15 crc kubenswrapper[4858]: I1205 15:33:15.783730 4858 scope.go:117] "RemoveContainer" containerID="f1ce991058367eacfa9142315f3d788a5ab4bbb354037047391fb59f92cd8a00"
Dec 05 15:33:16 crc kubenswrapper[4858]: I1205 15:33:16.833214 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"}
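The restart above follows a clean liveness cadence: failures at 15:32:14, 15:32:44 and 15:33:14 (30s apart), and on the third the kubelet kills machine-config-daemon with the pod's 600s grace period. The probe spec itself is not in the log; a periodSeconds=30 / failureThreshold=3 probe would produce exactly this pattern, as in the hypothetical model below:

    // Hypothetical model of the failure counting that matches the
    // timestamps above. The MCD's real probe parameters are inferred,
    // not taken from the log.
    package main

    import "fmt"

    const periodSeconds = 30   // assumed: matches the 30s spacing observed
    const failureThreshold = 3 // assumed: restart fires on the 3rd failure

    func main() {
        for i, t := range []string{"15:32:14", "15:32:44", "15:33:14"} {
            fmt.Printf("%s liveness failure %d/%d\n", t, i+1, failureThreshold)
        }
        fmt.Println("-> Container machine-config-daemon failed liveness probe, will be restarted")
    }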
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h6cs6"] Dec 05 15:33:33 crc kubenswrapper[4858]: E1205 15:33:33.912110 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd76b24-71a2-414a-8e3c-0a8bc7305386" containerName="collect-profiles" Dec 05 15:33:33 crc kubenswrapper[4858]: I1205 15:33:33.912129 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd76b24-71a2-414a-8e3c-0a8bc7305386" containerName="collect-profiles" Dec 05 15:33:33 crc kubenswrapper[4858]: I1205 15:33:33.912354 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd76b24-71a2-414a-8e3c-0a8bc7305386" containerName="collect-profiles" Dec 05 15:33:33 crc kubenswrapper[4858]: I1205 15:33:33.914023 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h6cs6" Dec 05 15:33:33 crc kubenswrapper[4858]: I1205 15:33:33.924261 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6cs6"] Dec 05 15:33:33 crc kubenswrapper[4858]: I1205 15:33:33.981660 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-utilities\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6" Dec 05 15:33:33 crc kubenswrapper[4858]: I1205 15:33:33.981884 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-catalog-content\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6" Dec 05 15:33:33 crc kubenswrapper[4858]: I1205 15:33:33.982257 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vwvz\" (UniqueName: \"kubernetes.io/projected/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-kube-api-access-9vwvz\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6" Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.084227 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vwvz\" (UniqueName: \"kubernetes.io/projected/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-kube-api-access-9vwvz\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6" Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.084283 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-utilities\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6" Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.084350 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-catalog-content\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6" Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 
Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.084955 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-utilities\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6"
Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.084976 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-catalog-content\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6"
Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.105171 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vwvz\" (UniqueName: \"kubernetes.io/projected/6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8-kube-api-access-9vwvz\") pod \"certified-operators-h6cs6\" (UID: \"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8\") " pod="openshift-marketplace/certified-operators-h6cs6"
Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.236059 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h6cs6"
Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.739155 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6cs6"]
Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.985893 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6cs6" event={"ID":"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8","Type":"ContainerStarted","Data":"2bafe9206eaddb9bc22d392b877db48c6d84760fca5047c48686cf0b7cd16b3e"}
Dec 05 15:33:34 crc kubenswrapper[4858]: I1205 15:33:34.986149 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6cs6" event={"ID":"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8","Type":"ContainerStarted","Data":"52f54894fcdd455738dc6dcd8b269d65dece4ed8e764eed2880161a06f3c9409"}
Dec 05 15:33:35 crc kubenswrapper[4858]: I1205 15:33:35.997318 4858 generic.go:334] "Generic (PLEG): container finished" podID="6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8" containerID="2bafe9206eaddb9bc22d392b877db48c6d84760fca5047c48686cf0b7cd16b3e" exitCode=0
Dec 05 15:33:35 crc kubenswrapper[4858]: I1205 15:33:35.997493 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6cs6" event={"ID":"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8","Type":"ContainerDied","Data":"2bafe9206eaddb9bc22d392b877db48c6d84760fca5047c48686cf0b7cd16b3e"}
Dec 05 15:33:43 crc kubenswrapper[4858]: I1205 15:33:43.059303 4858 generic.go:334] "Generic (PLEG): container finished" podID="6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8" containerID="748e90acfe2c1f5718a891f9d17d2943df16d014e8447a6ced17b529098e5504" exitCode=0
Dec 05 15:33:43 crc kubenswrapper[4858]: I1205 15:33:43.059496 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6cs6" event={"ID":"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8","Type":"ContainerDied","Data":"748e90acfe2c1f5718a891f9d17d2943df16d014e8447a6ced17b529098e5504"}
Dec 05 15:33:46 crc kubenswrapper[4858]: I1205 15:33:46.087646 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h6cs6" event={"ID":"6c1c7f6c-e7fc-47c8-b639-5aba6ceac8b8","Type":"ContainerStarted","Data":"3859cb1ccbfcae64389220e68458bf9f867107c3b420288c4b222dbf73ffc2cf"}
Dec 05 15:33:46 crc kubenswrapper[4858]: I1205 15:33:46.109462 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h6cs6" podStartSLOduration=4.142525966 podStartE2EDuration="13.109439766s" podCreationTimestamp="2025-12-05 15:33:33 +0000 UTC" firstStartedPulling="2025-12-05 15:33:36.000023049 +0000 UTC m=+5824.547621188" lastFinishedPulling="2025-12-05 15:33:44.966936849 +0000 UTC m=+5833.514534988" observedRunningTime="2025-12-05 15:33:46.103060332 +0000 UTC m=+5834.650658471" watchObservedRunningTime="2025-12-05 15:33:46.109439766 +0000 UTC m=+5834.657037905"
Dec 05 15:33:54 crc kubenswrapper[4858]: I1205 15:33:54.236637 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h6cs6"
Dec 05 15:33:54 crc kubenswrapper[4858]: I1205 15:33:54.237278 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h6cs6"
Dec 05 15:33:54 crc kubenswrapper[4858]: I1205 15:33:54.289606 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h6cs6"
Dec 05 15:33:54 crc kubenswrapper[4858]: I1205 15:33:54.720153 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-57548d458d-t8ww2" podUID="4c9d3c6a-fda7-468e-9099-5f09c2dbdbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 05 15:33:55 crc kubenswrapper[4858]: I1205 15:33:55.217711 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h6cs6"
Dec 05 15:33:55 crc kubenswrapper[4858]: I1205 15:33:55.280812 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h6cs6"]
Dec 05 15:33:55 crc kubenswrapper[4858]: I1205 15:33:55.332383 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4n4r2"]
Dec 05 15:33:55 crc kubenswrapper[4858]: I1205 15:33:55.332812 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4n4r2" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="registry-server" containerID="cri-o://9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20" gracePeriod=2
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.026235 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4n4r2"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.182281 4858 generic.go:334] "Generic (PLEG): container finished" podID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerID="9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20" exitCode=0
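Note the ordering above: the replacement catalog pod certified-operators-h6cs6 passes its startup and readiness probes at 15:33:54-15:33:55, and only then is the old certified-operators-4n4r2 deleted, with a 2s grace period. Catalog refreshes are make-before-break. A hypothetical client-go sketch of that ready-then-delete swap (illustrative names; not what the marketplace operator actually runs):

    // Hedged sketch: delete the old pod only after the new one is Ready.
    package rollover

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    // Swap mirrors the ready-then-delete ordering seen in the log.
    func Swap(ctx context.Context, cs kubernetes.Interface, ns, newPod, oldPod string) error {
        p, err := cs.CoreV1().Pods(ns).Get(ctx, newPod, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if !isReady(p) {
            return fmt.Errorf("pod %s/%s not ready yet", ns, newPod)
        }
        grace := int64(2) // matches gracePeriod=2 above
        return cs.CoreV1().Pods(ns).Delete(ctx, oldPod, metav1.DeleteOptions{GracePeriodSeconds: &grace})
    }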
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.183783 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4n4r2"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.183936 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n4r2" event={"ID":"cb1143a5-8f39-460c-9d9c-121a877118b9","Type":"ContainerDied","Data":"9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20"}
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.184140 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4n4r2" event={"ID":"cb1143a5-8f39-460c-9d9c-121a877118b9","Type":"ContainerDied","Data":"9ff0010cfac3937df04fe4d2dc799ce3b32a61362ccee241d452d0795bfa58de"}
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.184185 4858 scope.go:117] "RemoveContainer" containerID="9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.198676 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-catalog-content\") pod \"cb1143a5-8f39-460c-9d9c-121a877118b9\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") "
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.198777 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-utilities\") pod \"cb1143a5-8f39-460c-9d9c-121a877118b9\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") "
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.198817 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8pd8\" (UniqueName: \"kubernetes.io/projected/cb1143a5-8f39-460c-9d9c-121a877118b9-kube-api-access-r8pd8\") pod \"cb1143a5-8f39-460c-9d9c-121a877118b9\" (UID: \"cb1143a5-8f39-460c-9d9c-121a877118b9\") "
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.199677 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-utilities" (OuterVolumeSpecName: "utilities") pod "cb1143a5-8f39-460c-9d9c-121a877118b9" (UID: "cb1143a5-8f39-460c-9d9c-121a877118b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.211141 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1143a5-8f39-460c-9d9c-121a877118b9-kube-api-access-r8pd8" (OuterVolumeSpecName: "kube-api-access-r8pd8") pod "cb1143a5-8f39-460c-9d9c-121a877118b9" (UID: "cb1143a5-8f39-460c-9d9c-121a877118b9"). InnerVolumeSpecName "kube-api-access-r8pd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.227255 4858 scope.go:117] "RemoveContainer" containerID="0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.275313 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb1143a5-8f39-460c-9d9c-121a877118b9" (UID: "cb1143a5-8f39-460c-9d9c-121a877118b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.298932 4858 scope.go:117] "RemoveContainer" containerID="3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.302253 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.302436 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1143a5-8f39-460c-9d9c-121a877118b9-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.302511 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8pd8\" (UniqueName: \"kubernetes.io/projected/cb1143a5-8f39-460c-9d9c-121a877118b9-kube-api-access-r8pd8\") on node \"crc\" DevicePath \"\""
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.339255 4858 scope.go:117] "RemoveContainer" containerID="9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20"
Dec 05 15:33:56 crc kubenswrapper[4858]: E1205 15:33:56.340314 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20\": container with ID starting with 9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20 not found: ID does not exist" containerID="9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.340346 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20"} err="failed to get container status \"9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20\": rpc error: code = NotFound desc = could not find container \"9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20\": container with ID starting with 9d650fe0f99678d28a0d7f91a7bc79377fd0957b8937f913bedabfe83cbe7e20 not found: ID does not exist"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.340365 4858 scope.go:117] "RemoveContainer" containerID="0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d"
Dec 05 15:33:56 crc kubenswrapper[4858]: E1205 15:33:56.341847 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d\": container with ID starting with 0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d not found: ID does not exist" containerID="0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.341881 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d"} err="failed to get container status \"0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d\": rpc error: code = NotFound desc = could not find container \"0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d\": container with ID starting with 0cd42fe30262f9d287ef30acbc0e2f7c00b548cf6ac68d7ecd008b32b335a09d not found: ID does not exist"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.341916 4858 scope.go:117] "RemoveContainer" containerID="3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b"
Dec 05 15:33:56 crc kubenswrapper[4858]: E1205 15:33:56.342289 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b\": container with ID starting with 3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b not found: ID does not exist" containerID="3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.342335 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b"} err="failed to get container status \"3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b\": rpc error: code = NotFound desc = could not find container \"3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b\": container with ID starting with 3184437a2594352a106c6146bcf266604b88c179290df62d0a82df54fe38fd9b not found: ID does not exist"
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.522212 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4n4r2"]
Dec 05 15:33:56 crc kubenswrapper[4858]: I1205 15:33:56.530709 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4n4r2"]
Dec 05 15:33:57 crc kubenswrapper[4858]: I1205 15:33:57.911727 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" path="/var/lib/kubelet/pods/cb1143a5-8f39-460c-9d9c-121a877118b9/volumes"
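The ContainerStatus/DeleteContainer NotFound errors above are benign: the kubelet retries RemoveContainer for containers CRI-O has already deleted, gets codes.NotFound back over CRI, logs it, and moves on. The standard way to express that idempotency in cleanup code is to swallow NotFound; a small illustrative helper (not kubelet source):

    // Hypothetical cleanup helper: once a container is already gone,
    // deletion has effectively succeeded.
    package cleanup

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // ignoreNotFound converts a gRPC NotFound error into success.
    func ignoreNotFound(err error) error {
        if status.Code(err) == codes.NotFound {
            return nil // container already removed by the runtime
        }
        return err
    }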
Dec 05 15:35:44 crc kubenswrapper[4858]: I1205 15:35:44.759499 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:35:44 crc kubenswrapper[4858]: I1205 15:35:44.760117 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:36:14 crc kubenswrapper[4858]: I1205 15:36:14.759885 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:36:14 crc kubenswrapper[4858]: I1205 15:36:14.760555 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:36:44 crc kubenswrapper[4858]: I1205 15:36:44.759600 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 15:36:44 crc kubenswrapper[4858]: I1205 15:36:44.760126 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 15:36:44 crc kubenswrapper[4858]: I1205 15:36:44.760180 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn"
Dec 05 15:36:44 crc kubenswrapper[4858]: I1205 15:36:44.760919 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 15:36:44 crc kubenswrapper[4858]: I1205 15:36:44.760972 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" gracePeriod=600
Dec 05 15:36:44 crc kubenswrapper[4858]: E1205 15:36:44.884985 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:36:45 crc kubenswrapper[4858]: I1205 15:36:45.711855 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" exitCode=0
Dec 05 15:36:45 crc kubenswrapper[4858]: I1205 15:36:45.711860 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"}
Dec 05 15:36:45 crc kubenswrapper[4858]: I1205 15:36:45.712279 4858 scope.go:117] "RemoveContainer" containerID="869f92cbb584b0423ed653719716bd235fabec973239a518db2d7b48502cae4d"
Dec 05 15:36:45 crc kubenswrapper[4858]: I1205 15:36:45.713057 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"
Dec 05 15:36:45 crc kubenswrapper[4858]: E1205 15:36:45.713399 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
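From here the log settles into CrashLoopBackOff: repeated liveness-triggered restarts of machine-config-daemon have pushed the kubelet's restart backoff to its cap, so it now refuses to restart the container immediately and every sync attempt logs the back-off error. The kubelet's container restart backoff doubles per restart from an initial 10s up to a 5m cap, which is why the message pins at "back-off 5m0s"; a sketch of that progression (default parameters assumed, not kubelet source):

    // Illustrative restart-backoff progression under kubelet defaults.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute // assumed defaults: 10s, x2, 5m cap
        for restart := 1; restart <= 6; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // pins at 5m0s, as in the log
            }
        }
    }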
Dec 05 15:36:57 crc kubenswrapper[4858]: I1205 15:36:57.899335 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"
Dec 05 15:36:57 crc kubenswrapper[4858]: E1205 15:36:57.900081 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:37:10 crc kubenswrapper[4858]: I1205 15:37:10.900356 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"
Dec 05 15:37:10 crc kubenswrapper[4858]: E1205 15:37:10.901073 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:37:25 crc kubenswrapper[4858]: I1205 15:37:25.899530 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"
Dec 05 15:37:25 crc kubenswrapper[4858]: E1205 15:37:25.900291 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.617110 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rgdfk"]
Dec 05 15:37:40 crc kubenswrapper[4858]: E1205 15:37:40.618390 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="registry-server"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.618415 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="registry-server"
Dec 05 15:37:40 crc kubenswrapper[4858]: E1205 15:37:40.618470 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="extract-content"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.618483 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="extract-content"
Dec 05 15:37:40 crc kubenswrapper[4858]: E1205 15:37:40.618517 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="extract-utilities"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.618532 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="extract-utilities"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.618913 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1143a5-8f39-460c-9d9c-121a877118b9" containerName="registry-server"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.621744 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.627581 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgdfk"]
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.791258 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-catalog-content\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.791743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4jgb\" (UniqueName: \"kubernetes.io/projected/da738548-c149-4f9d-91e9-9ae5b977800b-kube-api-access-f4jgb\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.791990 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-utilities\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.893728 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-catalog-content\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.893807 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4jgb\" (UniqueName: \"kubernetes.io/projected/da738548-c149-4f9d-91e9-9ae5b977800b-kube-api-access-f4jgb\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.893896 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-utilities\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.894814 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-catalog-content\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.894884 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-utilities\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.899412 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"
Dec 05 15:37:40 crc kubenswrapper[4858]: E1205 15:37:40.899685 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.918228 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4jgb\" (UniqueName: \"kubernetes.io/projected/da738548-c149-4f9d-91e9-9ae5b977800b-kube-api-access-f4jgb\") pod \"community-operators-rgdfk\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") " pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:40 crc kubenswrapper[4858]: I1205 15:37:40.954390 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:41 crc kubenswrapper[4858]: I1205 15:37:41.440708 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgdfk"]
Dec 05 15:37:42 crc kubenswrapper[4858]: I1205 15:37:42.439151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgdfk" event={"ID":"da738548-c149-4f9d-91e9-9ae5b977800b","Type":"ContainerStarted","Data":"39d54ab7f76dcd0bd69f0295c871b58b88a967a74759788587b63c82d1ee31c2"}
Dec 05 15:37:43 crc kubenswrapper[4858]: I1205 15:37:43.448590 4858 generic.go:334] "Generic (PLEG): container finished" podID="da738548-c149-4f9d-91e9-9ae5b977800b" containerID="3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0" exitCode=0
Dec 05 15:37:43 crc kubenswrapper[4858]: I1205 15:37:43.448642 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgdfk" event={"ID":"da738548-c149-4f9d-91e9-9ae5b977800b","Type":"ContainerDied","Data":"3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0"}
Dec 05 15:37:43 crc kubenswrapper[4858]: I1205 15:37:43.451084 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 05 15:37:44 crc kubenswrapper[4858]: I1205 15:37:44.458606 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgdfk" event={"ID":"da738548-c149-4f9d-91e9-9ae5b977800b","Type":"ContainerStarted","Data":"1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13"}
Dec 05 15:37:47 crc kubenswrapper[4858]: I1205 15:37:47.487754 4858 generic.go:334] "Generic (PLEG): container finished" podID="da738548-c149-4f9d-91e9-9ae5b977800b" containerID="1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13" exitCode=0
Dec 05 15:37:47 crc kubenswrapper[4858]: I1205 15:37:47.488814 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgdfk" event={"ID":"da738548-c149-4f9d-91e9-9ae5b977800b","Type":"ContainerDied","Data":"1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13"}
Dec 05 15:37:49 crc kubenswrapper[4858]: I1205 15:37:49.506636 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgdfk" event={"ID":"da738548-c149-4f9d-91e9-9ae5b977800b","Type":"ContainerStarted","Data":"513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e"}
Dec 05 15:37:49 crc kubenswrapper[4858]: I1205 15:37:49.536510 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rgdfk" podStartSLOduration=4.6831798540000005 podStartE2EDuration="9.53649191s" podCreationTimestamp="2025-12-05 15:37:40 +0000 UTC" firstStartedPulling="2025-12-05 15:37:43.450541195 +0000 UTC m=+6071.998139334" lastFinishedPulling="2025-12-05 15:37:48.303853251 +0000 UTC m=+6076.851451390" observedRunningTime="2025-12-05 15:37:49.528384162 +0000 UTC m=+6078.075982301" watchObservedRunningTime="2025-12-05 15:37:49.53649191 +0000 UTC m=+6078.084090049"
Dec 05 15:37:50 crc kubenswrapper[4858]: I1205 15:37:50.954941 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:50 crc kubenswrapper[4858]: I1205 15:37:50.955463 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:37:52 crc kubenswrapper[4858]: I1205 15:37:52.032147 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rgdfk" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="registry-server" probeResult="failure" output=<
Dec 05 15:37:52 crc kubenswrapper[4858]: 	timeout: failed to connect service ":50051" within 1s
Dec 05 15:37:52 crc kubenswrapper[4858]: >
Dec 05 15:37:55 crc kubenswrapper[4858]: I1205 15:37:55.898750 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922"
Dec 05 15:37:55 crc kubenswrapper[4858]: E1205 15:37:55.899579 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 15:38:01 crc kubenswrapper[4858]: I1205 15:38:01.006684 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:38:01 crc kubenswrapper[4858]: I1205 15:38:01.056942 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:38:01 crc kubenswrapper[4858]: I1205 15:38:01.248057 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgdfk"]
Dec 05 15:38:02 crc kubenswrapper[4858]: I1205 15:38:02.618270 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rgdfk" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="registry-server" containerID="cri-o://513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e" gracePeriod=2
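The startup probe output above (timeout: failed to connect service ":50051" within 1s) is characteristic of a grpc-health-probe-style exec check against the catalog pod's registry-server, which serves the standard grpc.health.v1 Health API on port 50051; the probe fails for a few seconds while the freshly extracted catalog loads, then flips to started/ready at 15:38:01. A Go equivalent of such a check, for illustration only (this is not the probe binary the image actually ships; address and 1s budget mirror the log):

    // Hedged sketch of a gRPC health check against registry-server.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second) // "within 1s"
        defer cancel()

        conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            fmt.Println("probe failure:", err) // e.g. deadline exceeded while the catalog loads
            return
        }
        fmt.Println("probe status:", resp.Status) // SERVING once registry-server is up
    }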
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.139770 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.235182 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-utilities\") pod \"da738548-c149-4f9d-91e9-9ae5b977800b\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") "
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.235249 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-catalog-content\") pod \"da738548-c149-4f9d-91e9-9ae5b977800b\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") "
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.235391 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4jgb\" (UniqueName: \"kubernetes.io/projected/da738548-c149-4f9d-91e9-9ae5b977800b-kube-api-access-f4jgb\") pod \"da738548-c149-4f9d-91e9-9ae5b977800b\" (UID: \"da738548-c149-4f9d-91e9-9ae5b977800b\") "
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.237023 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-utilities" (OuterVolumeSpecName: "utilities") pod "da738548-c149-4f9d-91e9-9ae5b977800b" (UID: "da738548-c149-4f9d-91e9-9ae5b977800b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.237410 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.246124 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da738548-c149-4f9d-91e9-9ae5b977800b-kube-api-access-f4jgb" (OuterVolumeSpecName: "kube-api-access-f4jgb") pod "da738548-c149-4f9d-91e9-9ae5b977800b" (UID: "da738548-c149-4f9d-91e9-9ae5b977800b"). InnerVolumeSpecName "kube-api-access-f4jgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.287487 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da738548-c149-4f9d-91e9-9ae5b977800b" (UID: "da738548-c149-4f9d-91e9-9ae5b977800b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.340162 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da738548-c149-4f9d-91e9-9ae5b977800b-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.340194 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4jgb\" (UniqueName: \"kubernetes.io/projected/da738548-c149-4f9d-91e9-9ae5b977800b-kube-api-access-f4jgb\") on node \"crc\" DevicePath \"\""
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.628434 4858 generic.go:334] "Generic (PLEG): container finished" podID="da738548-c149-4f9d-91e9-9ae5b977800b" containerID="513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e" exitCode=0
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.628478 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgdfk" event={"ID":"da738548-c149-4f9d-91e9-9ae5b977800b","Type":"ContainerDied","Data":"513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e"}
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.628497 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgdfk"
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.628517 4858 scope.go:117] "RemoveContainer" containerID="513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e"
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.628505 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgdfk" event={"ID":"da738548-c149-4f9d-91e9-9ae5b977800b","Type":"ContainerDied","Data":"39d54ab7f76dcd0bd69f0295c871b58b88a967a74759788587b63c82d1ee31c2"}
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.656398 4858 scope.go:117] "RemoveContainer" containerID="1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13"
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.663005 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgdfk"]
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.676843 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rgdfk"]
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.678664 4858 scope.go:117] "RemoveContainer" containerID="3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0"
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.742467 4858 scope.go:117] "RemoveContainer" containerID="513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e"
Dec 05 15:38:03 crc kubenswrapper[4858]: E1205 15:38:03.743172 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e\": container with ID starting with 513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e not found: ID does not exist" containerID="513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e"
Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.743216 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e"} err="failed to get container status \"513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e\": rpc error: code = NotFound desc = could not find container \"513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e\": container with ID starting with 513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e not found: ID does not exist"
\"513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e\": rpc error: code = NotFound desc = could not find container \"513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e\": container with ID starting with 513f61eea7b9107c2be4165a6ba536626c913b7b33046eb5de4be095de42a33e not found: ID does not exist" Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.743259 4858 scope.go:117] "RemoveContainer" containerID="1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13" Dec 05 15:38:03 crc kubenswrapper[4858]: E1205 15:38:03.743727 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13\": container with ID starting with 1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13 not found: ID does not exist" containerID="1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13" Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.743759 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13"} err="failed to get container status \"1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13\": rpc error: code = NotFound desc = could not find container \"1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13\": container with ID starting with 1c6b4f70aee731a3d8c127370c455d57b94c5e199513735e1d22b54caaa9dd13 not found: ID does not exist" Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.743780 4858 scope.go:117] "RemoveContainer" containerID="3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0" Dec 05 15:38:03 crc kubenswrapper[4858]: E1205 15:38:03.744160 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0\": container with ID starting with 3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0 not found: ID does not exist" containerID="3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0" Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.744180 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0"} err="failed to get container status \"3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0\": rpc error: code = NotFound desc = could not find container \"3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0\": container with ID starting with 3dc735f1d14b811772578de5f116591bf5a61136cb5f1783b2222235f65871e0 not found: ID does not exist" Dec 05 15:38:03 crc kubenswrapper[4858]: I1205 15:38:03.908848 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" path="/var/lib/kubelet/pods/da738548-c149-4f9d-91e9-9ae5b977800b/volumes" Dec 05 15:38:08 crc kubenswrapper[4858]: I1205 15:38:08.899160 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:38:08 crc kubenswrapper[4858]: E1205 15:38:08.899909 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:38:23 crc kubenswrapper[4858]: I1205 15:38:23.899160 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:38:23 crc kubenswrapper[4858]: E1205 15:38:23.900805 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:38:36 crc kubenswrapper[4858]: I1205 15:38:36.899689 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:38:36 crc kubenswrapper[4858]: E1205 15:38:36.900399 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:38:51 crc kubenswrapper[4858]: I1205 15:38:51.907816 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:38:51 crc kubenswrapper[4858]: E1205 15:38:51.908562 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:39:05 crc kubenswrapper[4858]: I1205 15:39:05.899688 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:39:05 crc kubenswrapper[4858]: E1205 15:39:05.900376 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:39:17 crc kubenswrapper[4858]: I1205 15:39:17.904090 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:39:17 crc kubenswrapper[4858]: E1205 15:39:17.904886 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:39:28 crc kubenswrapper[4858]: I1205 15:39:28.900030 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:39:28 crc kubenswrapper[4858]: E1205 15:39:28.901747 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:39:30 crc kubenswrapper[4858]: I1205 15:39:30.913191 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-55jgs"] Dec 05 15:39:30 crc kubenswrapper[4858]: E1205 15:39:30.913890 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="extract-utilities" Dec 05 15:39:30 crc kubenswrapper[4858]: I1205 15:39:30.913910 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="extract-utilities" Dec 05 15:39:30 crc kubenswrapper[4858]: E1205 15:39:30.913959 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="registry-server" Dec 05 15:39:30 crc kubenswrapper[4858]: I1205 15:39:30.913967 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="registry-server" Dec 05 15:39:30 crc kubenswrapper[4858]: E1205 15:39:30.913986 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="extract-content" Dec 05 15:39:30 crc kubenswrapper[4858]: I1205 15:39:30.913995 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="extract-content" Dec 05 15:39:30 crc kubenswrapper[4858]: I1205 15:39:30.914200 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="da738548-c149-4f9d-91e9-9ae5b977800b" containerName="registry-server" Dec 05 15:39:30 crc kubenswrapper[4858]: I1205 15:39:30.915998 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:30 crc kubenswrapper[4858]: I1205 15:39:30.928740 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-55jgs"] Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.059355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-catalog-content\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.059645 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-utilities\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.059690 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfm2t\" (UniqueName: \"kubernetes.io/projected/b72719ae-7338-4d95-95a3-bf0b42d694a4-kube-api-access-tfm2t\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.161837 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-catalog-content\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.161890 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-utilities\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.161940 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfm2t\" (UniqueName: \"kubernetes.io/projected/b72719ae-7338-4d95-95a3-bf0b42d694a4-kube-api-access-tfm2t\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.162688 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-catalog-content\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.162800 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-utilities\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.181001 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tfm2t\" (UniqueName: \"kubernetes.io/projected/b72719ae-7338-4d95-95a3-bf0b42d694a4-kube-api-access-tfm2t\") pod \"redhat-marketplace-55jgs\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.253257 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:31 crc kubenswrapper[4858]: I1205 15:39:31.881470 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-55jgs"] Dec 05 15:39:32 crc kubenswrapper[4858]: I1205 15:39:32.401856 4858 generic.go:334] "Generic (PLEG): container finished" podID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerID="25e5cea52e200f1fccbce056859dcd2840b508f328b21925c5c4224fad4adfaf" exitCode=0 Dec 05 15:39:32 crc kubenswrapper[4858]: I1205 15:39:32.401902 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55jgs" event={"ID":"b72719ae-7338-4d95-95a3-bf0b42d694a4","Type":"ContainerDied","Data":"25e5cea52e200f1fccbce056859dcd2840b508f328b21925c5c4224fad4adfaf"} Dec 05 15:39:32 crc kubenswrapper[4858]: I1205 15:39:32.401926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55jgs" event={"ID":"b72719ae-7338-4d95-95a3-bf0b42d694a4","Type":"ContainerStarted","Data":"968631cfc24efd1cade665d12022a0658acdf0e3a96bc212a2ac99d86347319f"} Dec 05 15:39:33 crc kubenswrapper[4858]: I1205 15:39:33.411721 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55jgs" event={"ID":"b72719ae-7338-4d95-95a3-bf0b42d694a4","Type":"ContainerStarted","Data":"905752bea467a5542628cef8aba8099ddf4cd2d199ee7d2998fd61c5be1f4972"} Dec 05 15:39:34 crc kubenswrapper[4858]: I1205 15:39:34.422098 4858 generic.go:334] "Generic (PLEG): container finished" podID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerID="905752bea467a5542628cef8aba8099ddf4cd2d199ee7d2998fd61c5be1f4972" exitCode=0 Dec 05 15:39:34 crc kubenswrapper[4858]: I1205 15:39:34.422373 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55jgs" event={"ID":"b72719ae-7338-4d95-95a3-bf0b42d694a4","Type":"ContainerDied","Data":"905752bea467a5542628cef8aba8099ddf4cd2d199ee7d2998fd61c5be1f4972"} Dec 05 15:39:35 crc kubenswrapper[4858]: I1205 15:39:35.433629 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55jgs" event={"ID":"b72719ae-7338-4d95-95a3-bf0b42d694a4","Type":"ContainerStarted","Data":"9719e8b4528ff56d882b6adc5819a9cd17adf5a09eb64146a117cf630e3ac2a1"} Dec 05 15:39:35 crc kubenswrapper[4858]: I1205 15:39:35.452654 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-55jgs" podStartSLOduration=3.035692302 podStartE2EDuration="5.452638081s" podCreationTimestamp="2025-12-05 15:39:30 +0000 UTC" firstStartedPulling="2025-12-05 15:39:32.403425924 +0000 UTC m=+6180.951024063" lastFinishedPulling="2025-12-05 15:39:34.820371703 +0000 UTC m=+6183.367969842" observedRunningTime="2025-12-05 15:39:35.451176951 +0000 UTC m=+6183.998775100" watchObservedRunningTime="2025-12-05 15:39:35.452638081 +0000 UTC m=+6184.000236220" Dec 05 15:39:40 crc kubenswrapper[4858]: I1205 15:39:40.899162 4858 scope.go:117] "RemoveContainer" 
containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:39:40 crc kubenswrapper[4858]: E1205 15:39:40.899707 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:39:41 crc kubenswrapper[4858]: I1205 15:39:41.253720 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:41 crc kubenswrapper[4858]: I1205 15:39:41.253790 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:41 crc kubenswrapper[4858]: I1205 15:39:41.310710 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:41 crc kubenswrapper[4858]: I1205 15:39:41.538797 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:41 crc kubenswrapper[4858]: I1205 15:39:41.590350 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-55jgs"] Dec 05 15:39:43 crc kubenswrapper[4858]: I1205 15:39:43.503526 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-55jgs" podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerName="registry-server" containerID="cri-o://9719e8b4528ff56d882b6adc5819a9cd17adf5a09eb64146a117cf630e3ac2a1" gracePeriod=2 Dec 05 15:39:44 crc kubenswrapper[4858]: I1205 15:39:44.515996 4858 generic.go:334] "Generic (PLEG): container finished" podID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerID="9719e8b4528ff56d882b6adc5819a9cd17adf5a09eb64146a117cf630e3ac2a1" exitCode=0 Dec 05 15:39:44 crc kubenswrapper[4858]: I1205 15:39:44.516234 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55jgs" event={"ID":"b72719ae-7338-4d95-95a3-bf0b42d694a4","Type":"ContainerDied","Data":"9719e8b4528ff56d882b6adc5819a9cd17adf5a09eb64146a117cf630e3ac2a1"} Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.229769 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.351448 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-catalog-content\") pod \"b72719ae-7338-4d95-95a3-bf0b42d694a4\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.351603 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfm2t\" (UniqueName: \"kubernetes.io/projected/b72719ae-7338-4d95-95a3-bf0b42d694a4-kube-api-access-tfm2t\") pod \"b72719ae-7338-4d95-95a3-bf0b42d694a4\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.351731 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-utilities\") pod \"b72719ae-7338-4d95-95a3-bf0b42d694a4\" (UID: \"b72719ae-7338-4d95-95a3-bf0b42d694a4\") " Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.352750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-utilities" (OuterVolumeSpecName: "utilities") pod "b72719ae-7338-4d95-95a3-bf0b42d694a4" (UID: "b72719ae-7338-4d95-95a3-bf0b42d694a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.353296 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.370282 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72719ae-7338-4d95-95a3-bf0b42d694a4-kube-api-access-tfm2t" (OuterVolumeSpecName: "kube-api-access-tfm2t") pod "b72719ae-7338-4d95-95a3-bf0b42d694a4" (UID: "b72719ae-7338-4d95-95a3-bf0b42d694a4"). InnerVolumeSpecName "kube-api-access-tfm2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.375038 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b72719ae-7338-4d95-95a3-bf0b42d694a4" (UID: "b72719ae-7338-4d95-95a3-bf0b42d694a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.459870 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72719ae-7338-4d95-95a3-bf0b42d694a4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.459899 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfm2t\" (UniqueName: \"kubernetes.io/projected/b72719ae-7338-4d95-95a3-bf0b42d694a4-kube-api-access-tfm2t\") on node \"crc\" DevicePath \"\"" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.529480 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55jgs" event={"ID":"b72719ae-7338-4d95-95a3-bf0b42d694a4","Type":"ContainerDied","Data":"968631cfc24efd1cade665d12022a0658acdf0e3a96bc212a2ac99d86347319f"} Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.529538 4858 scope.go:117] "RemoveContainer" containerID="9719e8b4528ff56d882b6adc5819a9cd17adf5a09eb64146a117cf630e3ac2a1" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.530041 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55jgs" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.555211 4858 scope.go:117] "RemoveContainer" containerID="905752bea467a5542628cef8aba8099ddf4cd2d199ee7d2998fd61c5be1f4972" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.585574 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-55jgs"] Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.588214 4858 scope.go:117] "RemoveContainer" containerID="25e5cea52e200f1fccbce056859dcd2840b508f328b21925c5c4224fad4adfaf" Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.596627 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-55jgs"] Dec 05 15:39:45 crc kubenswrapper[4858]: I1205 15:39:45.913481 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" path="/var/lib/kubelet/pods/b72719ae-7338-4d95-95a3-bf0b42d694a4/volumes" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.479753 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dg5kj"] Dec 05 15:39:47 crc kubenswrapper[4858]: E1205 15:39:47.480612 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerName="registry-server" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.480630 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerName="registry-server" Dec 05 15:39:47 crc kubenswrapper[4858]: E1205 15:39:47.480650 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerName="extract-content" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.480658 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerName="extract-content" Dec 05 15:39:47 crc kubenswrapper[4858]: E1205 15:39:47.480683 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerName="extract-utilities" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.480691 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerName="extract-utilities" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.480958 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="b72719ae-7338-4d95-95a3-bf0b42d694a4" containerName="registry-server" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.483269 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.503417 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dg5kj"] Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.609752 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-utilities\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.609860 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxhm\" (UniqueName: \"kubernetes.io/projected/f51a8bfe-be84-424f-82f0-eac266caec3b-kube-api-access-cwxhm\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.609895 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-catalog-content\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.712462 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxhm\" (UniqueName: \"kubernetes.io/projected/f51a8bfe-be84-424f-82f0-eac266caec3b-kube-api-access-cwxhm\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.712526 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-catalog-content\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.712643 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-utilities\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.713190 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-catalog-content\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.713209 4858 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-utilities\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.734769 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxhm\" (UniqueName: \"kubernetes.io/projected/f51a8bfe-be84-424f-82f0-eac266caec3b-kube-api-access-cwxhm\") pod \"redhat-operators-dg5kj\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:47 crc kubenswrapper[4858]: I1205 15:39:47.817769 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:48 crc kubenswrapper[4858]: I1205 15:39:48.383276 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dg5kj"] Dec 05 15:39:48 crc kubenswrapper[4858]: I1205 15:39:48.556719 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg5kj" event={"ID":"f51a8bfe-be84-424f-82f0-eac266caec3b","Type":"ContainerStarted","Data":"d3342a60b9c05424b17d656f2b531c7e60655913625b615c09d1f630b5bc3092"} Dec 05 15:39:49 crc kubenswrapper[4858]: I1205 15:39:49.571106 4858 generic.go:334] "Generic (PLEG): container finished" podID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerID="635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab" exitCode=0 Dec 05 15:39:49 crc kubenswrapper[4858]: I1205 15:39:49.572849 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg5kj" event={"ID":"f51a8bfe-be84-424f-82f0-eac266caec3b","Type":"ContainerDied","Data":"635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab"} Dec 05 15:39:50 crc kubenswrapper[4858]: I1205 15:39:50.726356 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg5kj" event={"ID":"f51a8bfe-be84-424f-82f0-eac266caec3b","Type":"ContainerStarted","Data":"2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db"} Dec 05 15:39:54 crc kubenswrapper[4858]: I1205 15:39:54.900239 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:39:54 crc kubenswrapper[4858]: E1205 15:39:54.900925 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:39:55 crc kubenswrapper[4858]: I1205 15:39:55.772897 4858 generic.go:334] "Generic (PLEG): container finished" podID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerID="2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db" exitCode=0 Dec 05 15:39:55 crc kubenswrapper[4858]: I1205 15:39:55.773172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg5kj" event={"ID":"f51a8bfe-be84-424f-82f0-eac266caec3b","Type":"ContainerDied","Data":"2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db"} Dec 05 15:39:56 crc kubenswrapper[4858]: I1205 15:39:56.791529 4858 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg5kj" event={"ID":"f51a8bfe-be84-424f-82f0-eac266caec3b","Type":"ContainerStarted","Data":"cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42"} Dec 05 15:39:56 crc kubenswrapper[4858]: I1205 15:39:56.823558 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dg5kj" podStartSLOduration=2.83710589 podStartE2EDuration="9.823538771s" podCreationTimestamp="2025-12-05 15:39:47 +0000 UTC" firstStartedPulling="2025-12-05 15:39:49.579305713 +0000 UTC m=+6198.126903852" lastFinishedPulling="2025-12-05 15:39:56.565738584 +0000 UTC m=+6205.113336733" observedRunningTime="2025-12-05 15:39:56.815462123 +0000 UTC m=+6205.363060282" watchObservedRunningTime="2025-12-05 15:39:56.823538771 +0000 UTC m=+6205.371136910" Dec 05 15:39:57 crc kubenswrapper[4858]: I1205 15:39:57.818813 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:57 crc kubenswrapper[4858]: I1205 15:39:57.820247 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:39:58 crc kubenswrapper[4858]: I1205 15:39:58.888022 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dg5kj" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="registry-server" probeResult="failure" output=< Dec 05 15:39:58 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:39:58 crc kubenswrapper[4858]: > Dec 05 15:40:08 crc kubenswrapper[4858]: I1205 15:40:08.899352 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:40:08 crc kubenswrapper[4858]: E1205 15:40:08.900159 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:40:09 crc kubenswrapper[4858]: I1205 15:40:08.895305 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dg5kj" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="registry-server" probeResult="failure" output=< Dec 05 15:40:09 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:40:09 crc kubenswrapper[4858]: > Dec 05 15:40:17 crc kubenswrapper[4858]: I1205 15:40:17.869257 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:40:17 crc kubenswrapper[4858]: I1205 15:40:17.921501 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:40:18 crc kubenswrapper[4858]: I1205 15:40:18.683516 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dg5kj"] Dec 05 15:40:18 crc kubenswrapper[4858]: I1205 15:40:18.986191 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dg5kj" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" 
containerName="registry-server" containerID="cri-o://cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42" gracePeriod=2 Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.614060 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.754960 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-catalog-content\") pod \"f51a8bfe-be84-424f-82f0-eac266caec3b\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.755304 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwxhm\" (UniqueName: \"kubernetes.io/projected/f51a8bfe-be84-424f-82f0-eac266caec3b-kube-api-access-cwxhm\") pod \"f51a8bfe-be84-424f-82f0-eac266caec3b\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.755474 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-utilities\") pod \"f51a8bfe-be84-424f-82f0-eac266caec3b\" (UID: \"f51a8bfe-be84-424f-82f0-eac266caec3b\") " Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.756572 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-utilities" (OuterVolumeSpecName: "utilities") pod "f51a8bfe-be84-424f-82f0-eac266caec3b" (UID: "f51a8bfe-be84-424f-82f0-eac266caec3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.768106 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51a8bfe-be84-424f-82f0-eac266caec3b-kube-api-access-cwxhm" (OuterVolumeSpecName: "kube-api-access-cwxhm") pod "f51a8bfe-be84-424f-82f0-eac266caec3b" (UID: "f51a8bfe-be84-424f-82f0-eac266caec3b"). InnerVolumeSpecName "kube-api-access-cwxhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.858341 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwxhm\" (UniqueName: \"kubernetes.io/projected/f51a8bfe-be84-424f-82f0-eac266caec3b-kube-api-access-cwxhm\") on node \"crc\" DevicePath \"\"" Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.858365 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.868057 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f51a8bfe-be84-424f-82f0-eac266caec3b" (UID: "f51a8bfe-be84-424f-82f0-eac266caec3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.961972 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f51a8bfe-be84-424f-82f0-eac266caec3b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.997253 4858 generic.go:334] "Generic (PLEG): container finished" podID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerID="cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42" exitCode=0 Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.997292 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg5kj" event={"ID":"f51a8bfe-be84-424f-82f0-eac266caec3b","Type":"ContainerDied","Data":"cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42"} Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.997318 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg5kj" event={"ID":"f51a8bfe-be84-424f-82f0-eac266caec3b","Type":"ContainerDied","Data":"d3342a60b9c05424b17d656f2b531c7e60655913625b615c09d1f630b5bc3092"} Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.997337 4858 scope.go:117] "RemoveContainer" containerID="cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42" Dec 05 15:40:19 crc kubenswrapper[4858]: I1205 15:40:19.998513 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dg5kj" Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.028442 4858 scope.go:117] "RemoveContainer" containerID="2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db" Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.034841 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dg5kj"] Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.050722 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dg5kj"] Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.055941 4858 scope.go:117] "RemoveContainer" containerID="635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab" Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.112099 4858 scope.go:117] "RemoveContainer" containerID="cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42" Dec 05 15:40:20 crc kubenswrapper[4858]: E1205 15:40:20.112630 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42\": container with ID starting with cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42 not found: ID does not exist" containerID="cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42" Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.112661 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42"} err="failed to get container status \"cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42\": rpc error: code = NotFound desc = could not find container \"cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42\": container with ID starting with cd9e8b53da6c8a9c2856eb5f96e85535cdece75ae81c632c7c16d131dd22bd42 not found: ID does not exist" Dec 05 15:40:20 crc 
kubenswrapper[4858]: I1205 15:40:20.112686 4858 scope.go:117] "RemoveContainer" containerID="2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db" Dec 05 15:40:20 crc kubenswrapper[4858]: E1205 15:40:20.112919 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db\": container with ID starting with 2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db not found: ID does not exist" containerID="2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db" Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.112940 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db"} err="failed to get container status \"2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db\": rpc error: code = NotFound desc = could not find container \"2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db\": container with ID starting with 2be4498cd9ccd4c136c953694c24c03e5acb8a78e2d39411c15727f03353a0db not found: ID does not exist" Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.112955 4858 scope.go:117] "RemoveContainer" containerID="635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab" Dec 05 15:40:20 crc kubenswrapper[4858]: E1205 15:40:20.113157 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab\": container with ID starting with 635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab not found: ID does not exist" containerID="635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab" Dec 05 15:40:20 crc kubenswrapper[4858]: I1205 15:40:20.113177 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab"} err="failed to get container status \"635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab\": rpc error: code = NotFound desc = could not find container \"635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab\": container with ID starting with 635d67798414e4a0a4b2d6a3d1e4735f20afc134572c07d9c094770de7b8b9ab not found: ID does not exist" Dec 05 15:40:21 crc kubenswrapper[4858]: I1205 15:40:21.910584 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" path="/var/lib/kubelet/pods/f51a8bfe-be84-424f-82f0-eac266caec3b/volumes" Dec 05 15:40:22 crc kubenswrapper[4858]: I1205 15:40:22.899056 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:40:22 crc kubenswrapper[4858]: E1205 15:40:22.899596 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:40:36 crc kubenswrapper[4858]: I1205 15:40:36.898946 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" 
Dec 05 15:40:36 crc kubenswrapper[4858]: E1205 15:40:36.899629 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:40:47 crc kubenswrapper[4858]: I1205 15:40:47.899669 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:40:47 crc kubenswrapper[4858]: E1205 15:40:47.900352 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:41:01 crc kubenswrapper[4858]: I1205 15:41:01.905942 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:41:01 crc kubenswrapper[4858]: E1205 15:41:01.906642 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:41:16 crc kubenswrapper[4858]: I1205 15:41:16.898914 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:41:16 crc kubenswrapper[4858]: E1205 15:41:16.899647 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:41:28 crc kubenswrapper[4858]: I1205 15:41:28.899220 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:41:28 crc kubenswrapper[4858]: E1205 15:41:28.900040 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:41:40 crc kubenswrapper[4858]: I1205 15:41:40.899374 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:41:40 crc kubenswrapper[4858]: E1205 15:41:40.900102 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:41:54 crc kubenswrapper[4858]: I1205 15:41:54.899659 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:41:55 crc kubenswrapper[4858]: I1205 15:41:55.880595 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"b4d4daced1d1942f723c70a0cb67acda50f2ce55b47fb52753891735a6a82032"} Dec 05 15:44:14 crc kubenswrapper[4858]: I1205 15:44:14.759990 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:44:14 crc kubenswrapper[4858]: I1205 15:44:14.760531 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:44:44 crc kubenswrapper[4858]: I1205 15:44:44.760267 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:44:44 crc kubenswrapper[4858]: I1205 15:44:44.760849 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.217761 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr"] Dec 05 15:45:00 crc kubenswrapper[4858]: E1205 15:45:00.218813 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="extract-content" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.218844 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="extract-content" Dec 05 15:45:00 crc kubenswrapper[4858]: E1205 15:45:00.218862 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="registry-server" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.218870 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="registry-server" Dec 05 15:45:00 crc kubenswrapper[4858]: E1205 15:45:00.218887 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="extract-utilities" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 
15:45:00.218896 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="extract-utilities" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.219157 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51a8bfe-be84-424f-82f0-eac266caec3b" containerName="registry-server" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.219966 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.230313 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.230314 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.265940 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr"] Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.302218 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7142f7b-2a87-49b5-b3da-a652639e3a83-secret-volume\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.302287 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7142f7b-2a87-49b5-b3da-a652639e3a83-config-volume\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.302315 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn88d\" (UniqueName: \"kubernetes.io/projected/f7142f7b-2a87-49b5-b3da-a652639e3a83-kube-api-access-jn88d\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.403771 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7142f7b-2a87-49b5-b3da-a652639e3a83-secret-volume\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.403902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7142f7b-2a87-49b5-b3da-a652639e3a83-config-volume\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.403932 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn88d\" (UniqueName: 
\"kubernetes.io/projected/f7142f7b-2a87-49b5-b3da-a652639e3a83-kube-api-access-jn88d\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.405051 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7142f7b-2a87-49b5-b3da-a652639e3a83-config-volume\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.409587 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7142f7b-2a87-49b5-b3da-a652639e3a83-secret-volume\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.422157 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn88d\" (UniqueName: \"kubernetes.io/projected/f7142f7b-2a87-49b5-b3da-a652639e3a83-kube-api-access-jn88d\") pod \"collect-profiles-29415825-2h8xr\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:00 crc kubenswrapper[4858]: I1205 15:45:00.550996 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:01 crc kubenswrapper[4858]: I1205 15:45:01.215537 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr"] Dec 05 15:45:01 crc kubenswrapper[4858]: I1205 15:45:01.524096 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" event={"ID":"f7142f7b-2a87-49b5-b3da-a652639e3a83","Type":"ContainerStarted","Data":"f796f7192608175a37acd673ba0838fa5c3b5e093a168ad0a552cd2a5e3e8492"} Dec 05 15:45:01 crc kubenswrapper[4858]: I1205 15:45:01.524439 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" event={"ID":"f7142f7b-2a87-49b5-b3da-a652639e3a83","Type":"ContainerStarted","Data":"d47e3c1ca6ddabee179b38720fefc6fc7f582b45bbb35101e32e1fa1e9988ec1"} Dec 05 15:45:01 crc kubenswrapper[4858]: I1205 15:45:01.545250 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" podStartSLOduration=1.545232562 podStartE2EDuration="1.545232562s" podCreationTimestamp="2025-12-05 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 15:45:01.542991522 +0000 UTC m=+6510.090589661" watchObservedRunningTime="2025-12-05 15:45:01.545232562 +0000 UTC m=+6510.092830701" Dec 05 15:45:02 crc kubenswrapper[4858]: I1205 15:45:02.544037 4858 generic.go:334] "Generic (PLEG): container finished" podID="f7142f7b-2a87-49b5-b3da-a652639e3a83" containerID="f796f7192608175a37acd673ba0838fa5c3b5e093a168ad0a552cd2a5e3e8492" exitCode=0 Dec 05 15:45:02 crc kubenswrapper[4858]: I1205 15:45:02.544188 4858 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" event={"ID":"f7142f7b-2a87-49b5-b3da-a652639e3a83","Type":"ContainerDied","Data":"f796f7192608175a37acd673ba0838fa5c3b5e093a168ad0a552cd2a5e3e8492"} Dec 05 15:45:03 crc kubenswrapper[4858]: I1205 15:45:03.978315 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.077251 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7142f7b-2a87-49b5-b3da-a652639e3a83-secret-volume\") pod \"f7142f7b-2a87-49b5-b3da-a652639e3a83\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.077353 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn88d\" (UniqueName: \"kubernetes.io/projected/f7142f7b-2a87-49b5-b3da-a652639e3a83-kube-api-access-jn88d\") pod \"f7142f7b-2a87-49b5-b3da-a652639e3a83\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.077507 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7142f7b-2a87-49b5-b3da-a652639e3a83-config-volume\") pod \"f7142f7b-2a87-49b5-b3da-a652639e3a83\" (UID: \"f7142f7b-2a87-49b5-b3da-a652639e3a83\") " Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.078303 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7142f7b-2a87-49b5-b3da-a652639e3a83-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7142f7b-2a87-49b5-b3da-a652639e3a83" (UID: "f7142f7b-2a87-49b5-b3da-a652639e3a83"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.078640 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7142f7b-2a87-49b5-b3da-a652639e3a83-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.086395 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7142f7b-2a87-49b5-b3da-a652639e3a83-kube-api-access-jn88d" (OuterVolumeSpecName: "kube-api-access-jn88d") pod "f7142f7b-2a87-49b5-b3da-a652639e3a83" (UID: "f7142f7b-2a87-49b5-b3da-a652639e3a83"). InnerVolumeSpecName "kube-api-access-jn88d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.103378 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7142f7b-2a87-49b5-b3da-a652639e3a83-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7142f7b-2a87-49b5-b3da-a652639e3a83" (UID: "f7142f7b-2a87-49b5-b3da-a652639e3a83"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.180989 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7142f7b-2a87-49b5-b3da-a652639e3a83-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.181025 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn88d\" (UniqueName: \"kubernetes.io/projected/f7142f7b-2a87-49b5-b3da-a652639e3a83-kube-api-access-jn88d\") on node \"crc\" DevicePath \"\"" Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.282586 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr"] Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.292804 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415780-snbhr"] Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.566036 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" event={"ID":"f7142f7b-2a87-49b5-b3da-a652639e3a83","Type":"ContainerDied","Data":"d47e3c1ca6ddabee179b38720fefc6fc7f582b45bbb35101e32e1fa1e9988ec1"} Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.566354 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr" Dec 05 15:45:04 crc kubenswrapper[4858]: I1205 15:45:04.566081 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d47e3c1ca6ddabee179b38720fefc6fc7f582b45bbb35101e32e1fa1e9988ec1" Dec 05 15:45:05 crc kubenswrapper[4858]: I1205 15:45:05.921153 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f" path="/var/lib/kubelet/pods/2a99c566-4e47-47b9-a7aa-a41bc1d3bc2f/volumes" Dec 05 15:45:14 crc kubenswrapper[4858]: I1205 15:45:14.760071 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:45:14 crc kubenswrapper[4858]: I1205 15:45:14.760480 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:45:14 crc kubenswrapper[4858]: I1205 15:45:14.760518 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 15:45:14 crc kubenswrapper[4858]: I1205 15:45:14.761358 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4d4daced1d1942f723c70a0cb67acda50f2ce55b47fb52753891735a6a82032"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 15:45:14 crc kubenswrapper[4858]: I1205 15:45:14.761429 4858 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://b4d4daced1d1942f723c70a0cb67acda50f2ce55b47fb52753891735a6a82032" gracePeriod=600 Dec 05 15:45:15 crc kubenswrapper[4858]: I1205 15:45:15.680325 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="b4d4daced1d1942f723c70a0cb67acda50f2ce55b47fb52753891735a6a82032" exitCode=0 Dec 05 15:45:15 crc kubenswrapper[4858]: I1205 15:45:15.680395 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"b4d4daced1d1942f723c70a0cb67acda50f2ce55b47fb52753891735a6a82032"} Dec 05 15:45:15 crc kubenswrapper[4858]: I1205 15:45:15.680793 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36"} Dec 05 15:45:15 crc kubenswrapper[4858]: I1205 15:45:15.680816 4858 scope.go:117] "RemoveContainer" containerID="1c9174cfb7bd95b591d0b442ff97a9a90ba6a581c8639ed8c646525217aad922" Dec 05 15:46:02 crc kubenswrapper[4858]: I1205 15:46:02.528314 4858 scope.go:117] "RemoveContainer" containerID="2747d4b8f335fe2bb964f08e33e1c187675b7052bb80a92837e6e0adbf195c1a" Dec 05 15:47:44 crc kubenswrapper[4858]: I1205 15:47:44.760408 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:47:44 crc kubenswrapper[4858]: I1205 15:47:44.760788 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.296138 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zh6bl"] Dec 05 15:47:49 crc kubenswrapper[4858]: E1205 15:47:49.297227 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7142f7b-2a87-49b5-b3da-a652639e3a83" containerName="collect-profiles" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.297244 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7142f7b-2a87-49b5-b3da-a652639e3a83" containerName="collect-profiles" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.297552 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7142f7b-2a87-49b5-b3da-a652639e3a83" containerName="collect-profiles" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.299338 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.321244 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zh6bl"] Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.418470 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-catalog-content\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.418615 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-utilities\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.418656 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr8ds\" (UniqueName: \"kubernetes.io/projected/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-kube-api-access-wr8ds\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.497197 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2mzl4"] Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.499800 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.513459 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mzl4"] Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.520281 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-catalog-content\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.520330 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-utilities\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.520427 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr8ds\" (UniqueName: \"kubernetes.io/projected/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-kube-api-access-wr8ds\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.521418 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-catalog-content\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.521715 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-utilities\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.617019 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr8ds\" (UniqueName: \"kubernetes.io/projected/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-kube-api-access-wr8ds\") pod \"certified-operators-zh6bl\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.622563 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-catalog-content\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.622733 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-utilities\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.622929 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wf6h\" (UniqueName: \"kubernetes.io/projected/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-kube-api-access-9wf6h\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.725342 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-catalog-content\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.725450 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-utilities\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.725695 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wf6h\" (UniqueName: \"kubernetes.io/projected/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-kube-api-access-9wf6h\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.726267 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-catalog-content\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.726409 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-utilities\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.746686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wf6h\" (UniqueName: \"kubernetes.io/projected/ab1294ca-84e6-4429-acdf-9cc33f0ebfc0-kube-api-access-9wf6h\") pod \"community-operators-2mzl4\" (UID: \"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0\") " pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.868834 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:47:49 crc kubenswrapper[4858]: I1205 15:47:49.916866 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:50 crc kubenswrapper[4858]: I1205 15:47:50.423892 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mzl4"] Dec 05 15:47:50 crc kubenswrapper[4858]: I1205 15:47:50.549386 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zh6bl"] Dec 05 15:47:50 crc kubenswrapper[4858]: W1205 15:47:50.554064 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdc6a236_f8e1_4707_85a9_ea0534eb6de7.slice/crio-0fde2bbd2ae3b23975daa759307dceab23546f09a8ef19bbcef5a947b654f82a WatchSource:0}: Error finding container 0fde2bbd2ae3b23975daa759307dceab23546f09a8ef19bbcef5a947b654f82a: Status 404 returned error can't find the container with id 0fde2bbd2ae3b23975daa759307dceab23546f09a8ef19bbcef5a947b654f82a Dec 05 15:47:51 crc kubenswrapper[4858]: I1205 15:47:51.074150 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab1294ca-84e6-4429-acdf-9cc33f0ebfc0" containerID="2268723168a9c6dcbaa646ff77e62fb54d9f780ee7033b0b36722803dd8077b0" exitCode=0 Dec 05 15:47:51 crc kubenswrapper[4858]: I1205 15:47:51.074329 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mzl4" event={"ID":"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0","Type":"ContainerDied","Data":"2268723168a9c6dcbaa646ff77e62fb54d9f780ee7033b0b36722803dd8077b0"} Dec 05 15:47:51 crc kubenswrapper[4858]: I1205 15:47:51.074485 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mzl4" event={"ID":"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0","Type":"ContainerStarted","Data":"9d1eded1017cbcd34d3bab11b15f54d64fff3a9879a52b31f7e94c01f0ef7e3c"} Dec 05 15:47:51 crc kubenswrapper[4858]: I1205 15:47:51.078068 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 15:47:51 crc kubenswrapper[4858]: I1205 15:47:51.078223 4858 generic.go:334] "Generic (PLEG): container finished" podID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerID="e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822" exitCode=0 Dec 05 15:47:51 crc kubenswrapper[4858]: I1205 15:47:51.078261 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zh6bl" event={"ID":"bdc6a236-f8e1-4707-85a9-ea0534eb6de7","Type":"ContainerDied","Data":"e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822"} Dec 05 15:47:51 crc kubenswrapper[4858]: I1205 15:47:51.078286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zh6bl" event={"ID":"bdc6a236-f8e1-4707-85a9-ea0534eb6de7","Type":"ContainerStarted","Data":"0fde2bbd2ae3b23975daa759307dceab23546f09a8ef19bbcef5a947b654f82a"} Dec 05 15:47:53 crc kubenswrapper[4858]: I1205 15:47:53.768840 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zh6bl" event={"ID":"bdc6a236-f8e1-4707-85a9-ea0534eb6de7","Type":"ContainerStarted","Data":"98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9"} Dec 05 15:47:55 crc kubenswrapper[4858]: I1205 15:47:55.790688 4858 generic.go:334] "Generic (PLEG): container finished" podID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerID="98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9" exitCode=0 Dec 05 15:47:55 crc kubenswrapper[4858]: 
I1205 15:47:55.790756 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zh6bl" event={"ID":"bdc6a236-f8e1-4707-85a9-ea0534eb6de7","Type":"ContainerDied","Data":"98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9"} Dec 05 15:47:59 crc kubenswrapper[4858]: I1205 15:47:59.834084 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zh6bl" event={"ID":"bdc6a236-f8e1-4707-85a9-ea0534eb6de7","Type":"ContainerStarted","Data":"8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4"} Dec 05 15:47:59 crc kubenswrapper[4858]: I1205 15:47:59.835695 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mzl4" event={"ID":"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0","Type":"ContainerStarted","Data":"44568cdb9fdd042734078687b9eb46609a9fbcd4dc32adcd7c8f6011ef5020e1"} Dec 05 15:47:59 crc kubenswrapper[4858]: I1205 15:47:59.883414 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zh6bl" podStartSLOduration=2.636741007 podStartE2EDuration="10.883395911s" podCreationTimestamp="2025-12-05 15:47:49 +0000 UTC" firstStartedPulling="2025-12-05 15:47:51.07972315 +0000 UTC m=+6679.627321289" lastFinishedPulling="2025-12-05 15:47:59.326378054 +0000 UTC m=+6687.873976193" observedRunningTime="2025-12-05 15:47:59.873817652 +0000 UTC m=+6688.421415791" watchObservedRunningTime="2025-12-05 15:47:59.883395911 +0000 UTC m=+6688.430994050" Dec 05 15:47:59 crc kubenswrapper[4858]: I1205 15:47:59.917215 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:47:59 crc kubenswrapper[4858]: I1205 15:47:59.917275 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:48:00 crc kubenswrapper[4858]: I1205 15:48:00.977383 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zh6bl" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="registry-server" probeResult="failure" output=< Dec 05 15:48:00 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 15:48:00 crc kubenswrapper[4858]: > Dec 05 15:48:02 crc kubenswrapper[4858]: I1205 15:48:02.862677 4858 generic.go:334] "Generic (PLEG): container finished" podID="ab1294ca-84e6-4429-acdf-9cc33f0ebfc0" containerID="44568cdb9fdd042734078687b9eb46609a9fbcd4dc32adcd7c8f6011ef5020e1" exitCode=0 Dec 05 15:48:02 crc kubenswrapper[4858]: I1205 15:48:02.862775 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mzl4" event={"ID":"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0","Type":"ContainerDied","Data":"44568cdb9fdd042734078687b9eb46609a9fbcd4dc32adcd7c8f6011ef5020e1"} Dec 05 15:48:03 crc kubenswrapper[4858]: I1205 15:48:03.876021 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2mzl4" event={"ID":"ab1294ca-84e6-4429-acdf-9cc33f0ebfc0","Type":"ContainerStarted","Data":"3ae8c4334f6dbcdc0a9f9667e103b06d39eaa9f89c1a8386c9587989f2ad8901"} Dec 05 15:48:03 crc kubenswrapper[4858]: I1205 15:48:03.897448 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2mzl4" podStartSLOduration=2.617806384 podStartE2EDuration="14.897429557s" podCreationTimestamp="2025-12-05 
15:47:49 +0000 UTC" firstStartedPulling="2025-12-05 15:47:51.077428877 +0000 UTC m=+6679.625027016" lastFinishedPulling="2025-12-05 15:48:03.35705205 +0000 UTC m=+6691.904650189" observedRunningTime="2025-12-05 15:48:03.896445991 +0000 UTC m=+6692.444044130" watchObservedRunningTime="2025-12-05 15:48:03.897429557 +0000 UTC m=+6692.445027696" Dec 05 15:48:09 crc kubenswrapper[4858]: I1205 15:48:09.869757 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:48:09 crc kubenswrapper[4858]: I1205 15:48:09.871507 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:48:09 crc kubenswrapper[4858]: I1205 15:48:09.931571 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:48:09 crc kubenswrapper[4858]: I1205 15:48:09.974078 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:48:10 crc kubenswrapper[4858]: I1205 15:48:10.023452 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:48:10 crc kubenswrapper[4858]: I1205 15:48:10.984910 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2mzl4" Dec 05 15:48:11 crc kubenswrapper[4858]: I1205 15:48:11.960669 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zh6bl"] Dec 05 15:48:11 crc kubenswrapper[4858]: I1205 15:48:11.961125 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zh6bl" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="registry-server" containerID="cri-o://8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4" gracePeriod=2 Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.481304 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.583945 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2mzl4"] Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.655221 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-utilities\") pod \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.655269 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr8ds\" (UniqueName: \"kubernetes.io/projected/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-kube-api-access-wr8ds\") pod \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.655365 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-catalog-content\") pod \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\" (UID: \"bdc6a236-f8e1-4707-85a9-ea0534eb6de7\") " Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.659587 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-utilities" (OuterVolumeSpecName: "utilities") pod "bdc6a236-f8e1-4707-85a9-ea0534eb6de7" (UID: "bdc6a236-f8e1-4707-85a9-ea0534eb6de7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.662252 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-kube-api-access-wr8ds" (OuterVolumeSpecName: "kube-api-access-wr8ds") pod "bdc6a236-f8e1-4707-85a9-ea0534eb6de7" (UID: "bdc6a236-f8e1-4707-85a9-ea0534eb6de7"). InnerVolumeSpecName "kube-api-access-wr8ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.708389 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdc6a236-f8e1-4707-85a9-ea0534eb6de7" (UID: "bdc6a236-f8e1-4707-85a9-ea0534eb6de7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.758087 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.758152 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.758165 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr8ds\" (UniqueName: \"kubernetes.io/projected/bdc6a236-f8e1-4707-85a9-ea0534eb6de7-kube-api-access-wr8ds\") on node \"crc\" DevicePath \"\"" Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.956818 4858 generic.go:334] "Generic (PLEG): container finished" podID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerID="8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4" exitCode=0 Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.958213 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zh6bl" Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.964002 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zh6bl" event={"ID":"bdc6a236-f8e1-4707-85a9-ea0534eb6de7","Type":"ContainerDied","Data":"8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4"} Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.964204 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhrc4"] Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.964278 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zh6bl" event={"ID":"bdc6a236-f8e1-4707-85a9-ea0534eb6de7","Type":"ContainerDied","Data":"0fde2bbd2ae3b23975daa759307dceab23546f09a8ef19bbcef5a947b654f82a"} Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.964518 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mhrc4" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server" containerID="cri-o://db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e" gracePeriod=2 Dec 05 15:48:12 crc kubenswrapper[4858]: I1205 15:48:12.964681 4858 scope.go:117] "RemoveContainer" containerID="8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.002336 4858 scope.go:117] "RemoveContainer" containerID="98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.034564 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zh6bl"] Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.043093 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zh6bl"] Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.053999 4858 scope.go:117] "RemoveContainer" containerID="e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.127575 4858 scope.go:117] "RemoveContainer" 
containerID="8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4" Dec 05 15:48:13 crc kubenswrapper[4858]: E1205 15:48:13.128564 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4\": container with ID starting with 8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4 not found: ID does not exist" containerID="8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.128603 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4"} err="failed to get container status \"8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4\": rpc error: code = NotFound desc = could not find container \"8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4\": container with ID starting with 8d77f1b5e021123d61527db7aad70a8e9843794f50a758ff4118a98a247edbf4 not found: ID does not exist" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.128623 4858 scope.go:117] "RemoveContainer" containerID="98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9" Dec 05 15:48:13 crc kubenswrapper[4858]: E1205 15:48:13.129334 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9\": container with ID starting with 98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9 not found: ID does not exist" containerID="98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.129358 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9"} err="failed to get container status \"98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9\": rpc error: code = NotFound desc = could not find container \"98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9\": container with ID starting with 98588af2fad1e0fa3b27cca09566cc9c419bceae8bf6e9aee431e5019e21b0c9 not found: ID does not exist" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.129372 4858 scope.go:117] "RemoveContainer" containerID="e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822" Dec 05 15:48:13 crc kubenswrapper[4858]: E1205 15:48:13.132299 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822\": container with ID starting with e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822 not found: ID does not exist" containerID="e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.132324 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822"} err="failed to get container status \"e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822\": rpc error: code = NotFound desc = could not find container \"e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822\": container with ID starting with 
e17cd0736bb51c3c4ade83b5ef987a1877c16c45ff7f76ac6f50cb71a0bd8822 not found: ID does not exist" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.548048 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhrc4" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.681611 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-catalog-content\") pod \"67328f86-d148-42b9-b5e0-29d1aa422b03\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.681776 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs4sc\" (UniqueName: \"kubernetes.io/projected/67328f86-d148-42b9-b5e0-29d1aa422b03-kube-api-access-gs4sc\") pod \"67328f86-d148-42b9-b5e0-29d1aa422b03\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.681897 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-utilities\") pod \"67328f86-d148-42b9-b5e0-29d1aa422b03\" (UID: \"67328f86-d148-42b9-b5e0-29d1aa422b03\") " Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.682806 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-utilities" (OuterVolumeSpecName: "utilities") pod "67328f86-d148-42b9-b5e0-29d1aa422b03" (UID: "67328f86-d148-42b9-b5e0-29d1aa422b03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.705370 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67328f86-d148-42b9-b5e0-29d1aa422b03-kube-api-access-gs4sc" (OuterVolumeSpecName: "kube-api-access-gs4sc") pod "67328f86-d148-42b9-b5e0-29d1aa422b03" (UID: "67328f86-d148-42b9-b5e0-29d1aa422b03"). InnerVolumeSpecName "kube-api-access-gs4sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.744911 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67328f86-d148-42b9-b5e0-29d1aa422b03" (UID: "67328f86-d148-42b9-b5e0-29d1aa422b03"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.784365 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.784400 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs4sc\" (UniqueName: \"kubernetes.io/projected/67328f86-d148-42b9-b5e0-29d1aa422b03-kube-api-access-gs4sc\") on node \"crc\" DevicePath \"\"" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.784411 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67328f86-d148-42b9-b5e0-29d1aa422b03-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.911274 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" path="/var/lib/kubelet/pods/bdc6a236-f8e1-4707-85a9-ea0534eb6de7/volumes" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.967692 4858 generic.go:334] "Generic (PLEG): container finished" podID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerID="db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e" exitCode=0 Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.967752 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhrc4" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.967780 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhrc4" event={"ID":"67328f86-d148-42b9-b5e0-29d1aa422b03","Type":"ContainerDied","Data":"db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e"} Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.968089 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhrc4" event={"ID":"67328f86-d148-42b9-b5e0-29d1aa422b03","Type":"ContainerDied","Data":"97a3c76ab2591979031e192ae789e83de090ee3915403a48dcd27f4e64a5ec95"} Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.968109 4858 scope.go:117] "RemoveContainer" containerID="db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.993490 4858 scope.go:117] "RemoveContainer" containerID="0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384" Dec 05 15:48:13 crc kubenswrapper[4858]: I1205 15:48:13.996720 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhrc4"] Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.007160 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mhrc4"] Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.018218 4858 scope.go:117] "RemoveContainer" containerID="b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e" Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.039802 4858 scope.go:117] "RemoveContainer" containerID="db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e" Dec 05 15:48:14 crc kubenswrapper[4858]: E1205 15:48:14.040260 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e\": container with ID 
starting with db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e not found: ID does not exist" containerID="db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e" Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.040290 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e"} err="failed to get container status \"db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e\": rpc error: code = NotFound desc = could not find container \"db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e\": container with ID starting with db303ba11b3019090b86f17d267f65044648f3f1606ca2b92a769119f8ecb25e not found: ID does not exist" Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.040309 4858 scope.go:117] "RemoveContainer" containerID="0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384" Dec 05 15:48:14 crc kubenswrapper[4858]: E1205 15:48:14.040587 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384\": container with ID starting with 0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384 not found: ID does not exist" containerID="0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384" Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.040615 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384"} err="failed to get container status \"0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384\": rpc error: code = NotFound desc = could not find container \"0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384\": container with ID starting with 0974a99e96bc92efef0483998a8eec7320aed69888ca3e74c3b432ae2f9c2384 not found: ID does not exist" Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.040640 4858 scope.go:117] "RemoveContainer" containerID="b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e" Dec 05 15:48:14 crc kubenswrapper[4858]: E1205 15:48:14.040901 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e\": container with ID starting with b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e not found: ID does not exist" containerID="b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e" Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.040929 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e"} err="failed to get container status \"b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e\": rpc error: code = NotFound desc = could not find container \"b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e\": container with ID starting with b7abd96e386e95b064a04bb84ce4ceb0324d49458528e77fc49dc4e965a6239e not found: ID does not exist" Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.760466 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:48:14 crc kubenswrapper[4858]: I1205 15:48:14.760539 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:48:15 crc kubenswrapper[4858]: I1205 15:48:15.927701 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" path="/var/lib/kubelet/pods/67328f86-d148-42b9-b5e0-29d1aa422b03/volumes" Dec 05 15:48:44 crc kubenswrapper[4858]: I1205 15:48:44.760817 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:48:44 crc kubenswrapper[4858]: I1205 15:48:44.762019 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:48:44 crc kubenswrapper[4858]: I1205 15:48:44.762104 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 15:48:44 crc kubenswrapper[4858]: I1205 15:48:44.763538 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 15:48:44 crc kubenswrapper[4858]: I1205 15:48:44.763616 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" gracePeriod=600 Dec 05 15:48:44 crc kubenswrapper[4858]: E1205 15:48:44.887532 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:48:45 crc kubenswrapper[4858]: I1205 15:48:45.262470 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" exitCode=0 Dec 05 15:48:45 crc kubenswrapper[4858]: I1205 15:48:45.262546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36"} Dec 05 15:48:45 crc kubenswrapper[4858]: I1205 15:48:45.262787 4858 scope.go:117] "RemoveContainer" containerID="b4d4daced1d1942f723c70a0cb67acda50f2ce55b47fb52753891735a6a82032" Dec 05 15:48:45 crc kubenswrapper[4858]: I1205 15:48:45.263470 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:48:45 crc kubenswrapper[4858]: E1205 15:48:45.263774 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:48:58 crc kubenswrapper[4858]: I1205 15:48:58.899267 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:48:58 crc kubenswrapper[4858]: E1205 15:48:58.900003 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:49:12 crc kubenswrapper[4858]: I1205 15:49:12.899890 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:49:12 crc kubenswrapper[4858]: E1205 15:49:12.900664 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:49:25 crc kubenswrapper[4858]: I1205 15:49:25.899100 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:49:25 crc kubenswrapper[4858]: E1205 15:49:25.899957 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:49:39 crc kubenswrapper[4858]: I1205 15:49:39.900016 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:49:39 crc kubenswrapper[4858]: E1205 15:49:39.900780 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:49:54 crc kubenswrapper[4858]: I1205 15:49:54.899086 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:49:54 crc kubenswrapper[4858]: E1205 15:49:54.899729 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:50:08 crc kubenswrapper[4858]: I1205 15:50:08.899519 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:50:08 crc kubenswrapper[4858]: E1205 15:50:08.900307 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.223970 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jj9d4"] Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.224681 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="extract-utilities" Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.224699 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="extract-utilities" Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.224728 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="extract-content" Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.224758 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="extract-content" Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.224774 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="extract-utilities" Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.224782 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="extract-utilities" Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.224793 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server" Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.224800 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server" Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.225900 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="registry-server" Dec 05 15:50:12 crc 
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.223970 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jj9d4"]
Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.224681 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="extract-utilities"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.224699 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="extract-utilities"
Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.224728 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="extract-content"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.224758 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="extract-content"
Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.224774 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="extract-utilities"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.224782 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="extract-utilities"
Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.224793 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.224800 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server"
Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.225900 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="registry-server"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.225920 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="registry-server"
Dec 05 15:50:12 crc kubenswrapper[4858]: E1205 15:50:12.225947 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="extract-content"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.225956 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="extract-content"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.226193 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc6a236-f8e1-4707-85a9-ea0534eb6de7" containerName="registry-server"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.226220 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="67328f86-d148-42b9-b5e0-29d1aa422b03" containerName="registry-server"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.227586 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.251268 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jj9d4"]
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.382802 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-utilities\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.382867 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kq9w\" (UniqueName: \"kubernetes.io/projected/4201f234-246b-4cfc-9aa8-7d38e0dee351-kube-api-access-7kq9w\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.382902 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-catalog-content\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.484676 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-catalog-content\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.485030 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-utilities\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.485092 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kq9w\" (UniqueName: \"kubernetes.io/projected/4201f234-246b-4cfc-9aa8-7d38e0dee351-kube-api-access-7kq9w\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.485608 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-catalog-content\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.485645 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-utilities\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.511595 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kq9w\" (UniqueName: \"kubernetes.io/projected/4201f234-246b-4cfc-9aa8-7d38e0dee351-kube-api-access-7kq9w\") pod \"redhat-operators-jj9d4\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:12 crc kubenswrapper[4858]: I1205 15:50:12.547695 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jj9d4"
Dec 05 15:50:13 crc kubenswrapper[4858]: I1205 15:50:13.102230 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jj9d4"]
Dec 05 15:50:13 crc kubenswrapper[4858]: I1205 15:50:13.155989 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj9d4" event={"ID":"4201f234-246b-4cfc-9aa8-7d38e0dee351","Type":"ContainerStarted","Data":"9e23137d018571496c018fe8c25003b386a2502f3be545c38bd42c3b04fbac0d"}
Dec 05 15:50:14 crc kubenswrapper[4858]: I1205 15:50:14.168464 4858 generic.go:334] "Generic (PLEG): container finished" podID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerID="41dfd0927a3f7631d93dded97112bbef9e6b44ba10e898392f94f9b56fd3a72f" exitCode=0
Dec 05 15:50:14 crc kubenswrapper[4858]: I1205 15:50:14.168564 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj9d4" event={"ID":"4201f234-246b-4cfc-9aa8-7d38e0dee351","Type":"ContainerDied","Data":"41dfd0927a3f7631d93dded97112bbef9e6b44ba10e898392f94f9b56fd3a72f"}
Dec 05 15:50:15 crc kubenswrapper[4858]: I1205 15:50:15.185566 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj9d4" event={"ID":"4201f234-246b-4cfc-9aa8-7d38e0dee351","Type":"ContainerStarted","Data":"0f8552d81468174597b4a03f6421032391d755937bcbea1a4aeb0e8b558bb103"}
Dec 05 15:50:18 crc kubenswrapper[4858]: I1205 15:50:18.215158 4858 generic.go:334] "Generic (PLEG): container finished" podID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerID="0f8552d81468174597b4a03f6421032391d755937bcbea1a4aeb0e8b558bb103" exitCode=0
Dec 05 15:50:18 crc kubenswrapper[4858]: I1205 15:50:18.215241 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj9d4" event={"ID":"4201f234-246b-4cfc-9aa8-7d38e0dee351","Type":"ContainerDied","Data":"0f8552d81468174597b4a03f6421032391d755937bcbea1a4aeb0e8b558bb103"}
Dec 05 15:50:19 crc kubenswrapper[4858]: I1205 15:50:19.227872 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj9d4" event={"ID":"4201f234-246b-4cfc-9aa8-7d38e0dee351","Type":"ContainerStarted","Data":"cd42c5afb6e17aa964fe2f311e5cbd0f69cfaea5e637c6cb76001ffe2513721c"}
Dec 05 15:50:19 crc kubenswrapper[4858]: I1205 15:50:19.251276 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jj9d4" podStartSLOduration=2.754844137 podStartE2EDuration="7.251253782s" podCreationTimestamp="2025-12-05 15:50:12 +0000 UTC" firstStartedPulling="2025-12-05 15:50:14.17063697 +0000 UTC m=+6822.718235109" lastFinishedPulling="2025-12-05 15:50:18.667046615 +0000 UTC m=+6827.214644754" observedRunningTime="2025-12-05 15:50:19.244327535 +0000 UTC m=+6827.791925684" watchObservedRunningTime="2025-12-05 15:50:19.251253782 +0000 UTC m=+6827.798851921"
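The startup-latency entry above reports two durations for redhat-operators-jj9d4. Working from the monotonic m=+... clock values it prints, the SLO duration appears to be the end-to-end duration minus the image-pull window; a quick check of that reading (an inference from these numbers, not a documented contract):

# Values copied verbatim from the log entry above.
pull_window = 6827.214644754 - 6822.718235109   # lastFinishedPulling - firstStartedPulling -> 4.496409645 s
e2e = 7.251253782                                # podStartE2EDuration
print(e2e - pull_window)                         # ~2.754844137 s, matching podStartSLOduration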
source="api" pods=["openshift-marketplace/redhat-operators-jj9d4"] Dec 05 15:50:34 crc kubenswrapper[4858]: I1205 15:50:34.356879 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jj9d4" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerName="registry-server" containerID="cri-o://cd42c5afb6e17aa964fe2f311e5cbd0f69cfaea5e637c6cb76001ffe2513721c" gracePeriod=2 Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.369522 4858 generic.go:334] "Generic (PLEG): container finished" podID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerID="cd42c5afb6e17aa964fe2f311e5cbd0f69cfaea5e637c6cb76001ffe2513721c" exitCode=0 Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.371845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj9d4" event={"ID":"4201f234-246b-4cfc-9aa8-7d38e0dee351","Type":"ContainerDied","Data":"cd42c5afb6e17aa964fe2f311e5cbd0f69cfaea5e637c6cb76001ffe2513721c"} Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.373383 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj9d4" event={"ID":"4201f234-246b-4cfc-9aa8-7d38e0dee351","Type":"ContainerDied","Data":"9e23137d018571496c018fe8c25003b386a2502f3be545c38bd42c3b04fbac0d"} Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.373475 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e23137d018571496c018fe8c25003b386a2502f3be545c38bd42c3b04fbac0d" Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.443473 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jj9d4" Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.539782 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-catalog-content\") pod \"4201f234-246b-4cfc-9aa8-7d38e0dee351\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.539966 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kq9w\" (UniqueName: \"kubernetes.io/projected/4201f234-246b-4cfc-9aa8-7d38e0dee351-kube-api-access-7kq9w\") pod \"4201f234-246b-4cfc-9aa8-7d38e0dee351\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.540035 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-utilities\") pod \"4201f234-246b-4cfc-9aa8-7d38e0dee351\" (UID: \"4201f234-246b-4cfc-9aa8-7d38e0dee351\") " Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.541260 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-utilities" (OuterVolumeSpecName: "utilities") pod "4201f234-246b-4cfc-9aa8-7d38e0dee351" (UID: "4201f234-246b-4cfc-9aa8-7d38e0dee351"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.547882 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4201f234-246b-4cfc-9aa8-7d38e0dee351-kube-api-access-7kq9w" (OuterVolumeSpecName: "kube-api-access-7kq9w") pod "4201f234-246b-4cfc-9aa8-7d38e0dee351" (UID: "4201f234-246b-4cfc-9aa8-7d38e0dee351"). InnerVolumeSpecName "kube-api-access-7kq9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.642104 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.642141 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kq9w\" (UniqueName: \"kubernetes.io/projected/4201f234-246b-4cfc-9aa8-7d38e0dee351-kube-api-access-7kq9w\") on node \"crc\" DevicePath \"\"" Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.654633 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4201f234-246b-4cfc-9aa8-7d38e0dee351" (UID: "4201f234-246b-4cfc-9aa8-7d38e0dee351"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:50:35 crc kubenswrapper[4858]: I1205 15:50:35.743864 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4201f234-246b-4cfc-9aa8-7d38e0dee351-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:50:36 crc kubenswrapper[4858]: I1205 15:50:36.379267 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jj9d4" Dec 05 15:50:36 crc kubenswrapper[4858]: I1205 15:50:36.401554 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jj9d4"] Dec 05 15:50:36 crc kubenswrapper[4858]: I1205 15:50:36.410056 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jj9d4"] Dec 05 15:50:37 crc kubenswrapper[4858]: I1205 15:50:37.909071 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" path="/var/lib/kubelet/pods/4201f234-246b-4cfc-9aa8-7d38e0dee351/volumes" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.252918 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5gq2t"] Dec 05 15:50:38 crc kubenswrapper[4858]: E1205 15:50:38.253327 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerName="extract-content" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.253340 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerName="extract-content" Dec 05 15:50:38 crc kubenswrapper[4858]: E1205 15:50:38.253353 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerName="registry-server" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.253360 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerName="registry-server" Dec 05 15:50:38 crc kubenswrapper[4858]: E1205 15:50:38.253373 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerName="extract-utilities" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.253379 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerName="extract-utilities" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.253571 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="4201f234-246b-4cfc-9aa8-7d38e0dee351" containerName="registry-server" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.255019 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.276159 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gq2t"] Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.289704 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-catalog-content\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.289796 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-utilities\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.289981 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zz4t\" (UniqueName: \"kubernetes.io/projected/1c5ce72b-3ede-48b0-bbf9-095f141e8935-kube-api-access-6zz4t\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.391268 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zz4t\" (UniqueName: \"kubernetes.io/projected/1c5ce72b-3ede-48b0-bbf9-095f141e8935-kube-api-access-6zz4t\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.391355 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-catalog-content\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.391471 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-utilities\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.392007 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-utilities\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.392309 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-catalog-content\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.421873 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6zz4t\" (UniqueName: \"kubernetes.io/projected/1c5ce72b-3ede-48b0-bbf9-095f141e8935-kube-api-access-6zz4t\") pod \"redhat-marketplace-5gq2t\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:38 crc kubenswrapper[4858]: I1205 15:50:38.575629 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:39 crc kubenswrapper[4858]: I1205 15:50:39.174263 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gq2t"] Dec 05 15:50:39 crc kubenswrapper[4858]: I1205 15:50:39.405526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gq2t" event={"ID":"1c5ce72b-3ede-48b0-bbf9-095f141e8935","Type":"ContainerStarted","Data":"c0e47afee48a52e05f8bdf1c9e298743c5067974fcba77ac464585a0ea8bb194"} Dec 05 15:50:40 crc kubenswrapper[4858]: I1205 15:50:40.415539 4858 generic.go:334] "Generic (PLEG): container finished" podID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerID="3abf1d71723bdefad11b721defa3fc8e77a79501fff80c476e45ecc8325fa903" exitCode=0 Dec 05 15:50:40 crc kubenswrapper[4858]: I1205 15:50:40.415721 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gq2t" event={"ID":"1c5ce72b-3ede-48b0-bbf9-095f141e8935","Type":"ContainerDied","Data":"3abf1d71723bdefad11b721defa3fc8e77a79501fff80c476e45ecc8325fa903"} Dec 05 15:50:42 crc kubenswrapper[4858]: I1205 15:50:42.899934 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:50:42 crc kubenswrapper[4858]: E1205 15:50:42.900914 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:50:44 crc kubenswrapper[4858]: I1205 15:50:44.453321 4858 generic.go:334] "Generic (PLEG): container finished" podID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerID="a8f2ed608d6be839138355bb6f61a3ef5cce9685eeee94d7a51c59e0ba489f59" exitCode=0 Dec 05 15:50:44 crc kubenswrapper[4858]: I1205 15:50:44.453381 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gq2t" event={"ID":"1c5ce72b-3ede-48b0-bbf9-095f141e8935","Type":"ContainerDied","Data":"a8f2ed608d6be839138355bb6f61a3ef5cce9685eeee94d7a51c59e0ba489f59"} Dec 05 15:50:47 crc kubenswrapper[4858]: I1205 15:50:47.484325 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gq2t" event={"ID":"1c5ce72b-3ede-48b0-bbf9-095f141e8935","Type":"ContainerStarted","Data":"ad922159e1098833ae3db8640031819e0eaa3f9625a2f7bd8100f63cb6a5be0d"} Dec 05 15:50:47 crc kubenswrapper[4858]: I1205 15:50:47.505289 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5gq2t" podStartSLOduration=3.214408169 podStartE2EDuration="9.505271336s" podCreationTimestamp="2025-12-05 15:50:38 +0000 UTC" firstStartedPulling="2025-12-05 15:50:40.417752065 +0000 UTC m=+6848.965350204" lastFinishedPulling="2025-12-05 
15:50:46.708615232 +0000 UTC m=+6855.256213371" observedRunningTime="2025-12-05 15:50:47.503053466 +0000 UTC m=+6856.050651605" watchObservedRunningTime="2025-12-05 15:50:47.505271336 +0000 UTC m=+6856.052869465" Dec 05 15:50:48 crc kubenswrapper[4858]: I1205 15:50:48.576682 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:48 crc kubenswrapper[4858]: I1205 15:50:48.578342 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:48 crc kubenswrapper[4858]: I1205 15:50:48.632461 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:53 crc kubenswrapper[4858]: I1205 15:50:53.898907 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:50:53 crc kubenswrapper[4858]: E1205 15:50:53.899543 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:50:58 crc kubenswrapper[4858]: I1205 15:50:58.635938 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:50:58 crc kubenswrapper[4858]: I1205 15:50:58.691889 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gq2t"] Dec 05 15:50:59 crc kubenswrapper[4858]: I1205 15:50:59.611254 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5gq2t" podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerName="registry-server" containerID="cri-o://ad922159e1098833ae3db8640031819e0eaa3f9625a2f7bd8100f63cb6a5be0d" gracePeriod=2 Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.621657 4858 generic.go:334] "Generic (PLEG): container finished" podID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerID="ad922159e1098833ae3db8640031819e0eaa3f9625a2f7bd8100f63cb6a5be0d" exitCode=0 Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.622238 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gq2t" event={"ID":"1c5ce72b-3ede-48b0-bbf9-095f141e8935","Type":"ContainerDied","Data":"ad922159e1098833ae3db8640031819e0eaa3f9625a2f7bd8100f63cb6a5be0d"} Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.622279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5gq2t" event={"ID":"1c5ce72b-3ede-48b0-bbf9-095f141e8935","Type":"ContainerDied","Data":"c0e47afee48a52e05f8bdf1c9e298743c5067974fcba77ac464585a0ea8bb194"} Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.622292 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e47afee48a52e05f8bdf1c9e298743c5067974fcba77ac464585a0ea8bb194" Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.630537 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.722806 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-catalog-content\") pod \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.723124 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-utilities\") pod \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.723334 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zz4t\" (UniqueName: \"kubernetes.io/projected/1c5ce72b-3ede-48b0-bbf9-095f141e8935-kube-api-access-6zz4t\") pod \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\" (UID: \"1c5ce72b-3ede-48b0-bbf9-095f141e8935\") " Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.723946 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-utilities" (OuterVolumeSpecName: "utilities") pod "1c5ce72b-3ede-48b0-bbf9-095f141e8935" (UID: "1c5ce72b-3ede-48b0-bbf9-095f141e8935"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.724647 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.728946 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5ce72b-3ede-48b0-bbf9-095f141e8935-kube-api-access-6zz4t" (OuterVolumeSpecName: "kube-api-access-6zz4t") pod "1c5ce72b-3ede-48b0-bbf9-095f141e8935" (UID: "1c5ce72b-3ede-48b0-bbf9-095f141e8935"). InnerVolumeSpecName "kube-api-access-6zz4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.744240 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c5ce72b-3ede-48b0-bbf9-095f141e8935" (UID: "1c5ce72b-3ede-48b0-bbf9-095f141e8935"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.826885 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5ce72b-3ede-48b0-bbf9-095f141e8935-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 15:51:00 crc kubenswrapper[4858]: I1205 15:51:00.826915 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zz4t\" (UniqueName: \"kubernetes.io/projected/1c5ce72b-3ede-48b0-bbf9-095f141e8935-kube-api-access-6zz4t\") on node \"crc\" DevicePath \"\"" Dec 05 15:51:01 crc kubenswrapper[4858]: I1205 15:51:01.632355 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5gq2t" Dec 05 15:51:01 crc kubenswrapper[4858]: I1205 15:51:01.675724 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gq2t"] Dec 05 15:51:01 crc kubenswrapper[4858]: I1205 15:51:01.701722 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5gq2t"] Dec 05 15:51:01 crc kubenswrapper[4858]: I1205 15:51:01.912949 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" path="/var/lib/kubelet/pods/1c5ce72b-3ede-48b0-bbf9-095f141e8935/volumes" Dec 05 15:51:05 crc kubenswrapper[4858]: I1205 15:51:05.899401 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:51:05 crc kubenswrapper[4858]: E1205 15:51:05.900176 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:51:18 crc kubenswrapper[4858]: I1205 15:51:18.900150 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:51:18 crc kubenswrapper[4858]: E1205 15:51:18.900930 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:51:31 crc kubenswrapper[4858]: I1205 15:51:31.908423 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:51:31 crc kubenswrapper[4858]: E1205 15:51:31.909170 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:51:45 crc kubenswrapper[4858]: I1205 15:51:45.899943 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:51:45 crc kubenswrapper[4858]: E1205 15:51:45.900785 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:51:56 crc kubenswrapper[4858]: I1205 15:51:56.900446 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:51:56 
crc kubenswrapper[4858]: E1205 15:51:56.901842 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:52:09 crc kubenswrapper[4858]: I1205 15:52:09.900074 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:52:09 crc kubenswrapper[4858]: E1205 15:52:09.900972 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:52:24 crc kubenswrapper[4858]: I1205 15:52:24.899627 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:52:24 crc kubenswrapper[4858]: E1205 15:52:24.901211 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:52:39 crc kubenswrapper[4858]: I1205 15:52:39.899249 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:52:39 crc kubenswrapper[4858]: E1205 15:52:39.900907 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:52:54 crc kubenswrapper[4858]: I1205 15:52:54.901331 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:52:54 crc kubenswrapper[4858]: E1205 15:52:54.902369 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:53:09 crc kubenswrapper[4858]: I1205 15:53:09.900027 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:53:09 crc kubenswrapper[4858]: E1205 15:53:09.900985 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:53:23 crc kubenswrapper[4858]: I1205 15:53:23.900036 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:53:23 crc kubenswrapper[4858]: E1205 15:53:23.900690 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:53:36 crc kubenswrapper[4858]: I1205 15:53:36.899907 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:53:36 crc kubenswrapper[4858]: E1205 15:53:36.900736 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 15:53:50 crc kubenswrapper[4858]: I1205 15:53:50.899278 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:53:51 crc kubenswrapper[4858]: I1205 15:53:51.203307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"06101e90e0c8d4263fff741745c02d70e3ef2c85884f140efb5a5c6fee8a12b8"} Dec 05 15:54:49 crc kubenswrapper[4858]: I1205 15:54:49.409122 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6xnwj" podUID="992029c2-7acc-4f87-b054-4a062babc670" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 15:54:49 crc kubenswrapper[4858]: I1205 15:54:49.410671 4858 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.391813603s: [/var/lib/containers/storage/overlay/a46d98a77deb48ac2ec2917a679b482c9b9cbf672b0cf23b9db6bf334241d577/diff /var/log/pods/openstack_ovn-controller-metrics-wrph5_994a3e0f-1bc4-4b50-9f4f-dfc07fe5ce8f/openstack-network-exporter/0.log]; will not log again for this container unless duration exceeds 2s Dec 05 15:54:49 crc kubenswrapper[4858]: I1205 15:54:49.419201 4858 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.455709252s: [/var/lib/containers/storage/overlay/09970e24e04bf3e09a81e77258d91877a6ad427fa263af2ca867135e1b294ccd/diff /var/log/pods/openstack_ovsdbserver-sb-0_18eb80fb-2c3b-4c85-b52b-e3a0821ba693/openstack-network-exporter/0.log]; will not log again for this container unless duration exceeds 2s Dec 05 15:54:49 crc kubenswrapper[4858]: I1205 15:54:49.451194 4858 fsHandler.go:133] fs: disk 
usage and inodes count on following dirs took 2.487726488s: [/var/lib/containers/storage/overlay/d221271524754f7904643a72521a202fdf6530262fa025a206b3c1fc3d1b8f60/diff /var/log/pods/openstack_ovsdbserver-nb-0_c4c61018-b6f5-488a-948c-7eacd25c0b8e/openstack-network-exporter/0.log]; will not log again for this container unless duration exceeds 2s Dec 05 15:56:14 crc kubenswrapper[4858]: I1205 15:56:14.760253 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:56:14 crc kubenswrapper[4858]: I1205 15:56:14.760757 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:56:44 crc kubenswrapper[4858]: I1205 15:56:44.760386 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:56:44 crc kubenswrapper[4858]: I1205 15:56:44.760984 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:57:02 crc kubenswrapper[4858]: I1205 15:57:02.830291 4858 scope.go:117] "RemoveContainer" containerID="ad922159e1098833ae3db8640031819e0eaa3f9625a2f7bd8100f63cb6a5be0d" Dec 05 15:57:02 crc kubenswrapper[4858]: I1205 15:57:02.882853 4858 scope.go:117] "RemoveContainer" containerID="0f8552d81468174597b4a03f6421032391d755937bcbea1a4aeb0e8b558bb103" Dec 05 15:57:02 crc kubenswrapper[4858]: I1205 15:57:02.908754 4858 scope.go:117] "RemoveContainer" containerID="a8f2ed608d6be839138355bb6f61a3ef5cce9685eeee94d7a51c59e0ba489f59" Dec 05 15:57:02 crc kubenswrapper[4858]: I1205 15:57:02.969457 4858 scope.go:117] "RemoveContainer" containerID="cd42c5afb6e17aa964fe2f311e5cbd0f69cfaea5e637c6cb76001ffe2513721c" Dec 05 15:57:03 crc kubenswrapper[4858]: I1205 15:57:03.026034 4858 scope.go:117] "RemoveContainer" containerID="41dfd0927a3f7631d93dded97112bbef9e6b44ba10e898392f94f9b56fd3a72f" Dec 05 15:57:03 crc kubenswrapper[4858]: I1205 15:57:03.051476 4858 scope.go:117] "RemoveContainer" containerID="3abf1d71723bdefad11b721defa3fc8e77a79501fff80c476e45ecc8325fa903" Dec 05 15:57:14 crc kubenswrapper[4858]: I1205 15:57:14.760206 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:57:14 crc kubenswrapper[4858]: I1205 15:57:14.760811 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 15:57:14 crc kubenswrapper[4858]: I1205 15:57:14.760889 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 15:57:14 crc kubenswrapper[4858]: I1205 15:57:14.761644 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"06101e90e0c8d4263fff741745c02d70e3ef2c85884f140efb5a5c6fee8a12b8"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 15:57:14 crc kubenswrapper[4858]: I1205 15:57:14.761909 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://06101e90e0c8d4263fff741745c02d70e3ef2c85884f140efb5a5c6fee8a12b8" gracePeriod=600 Dec 05 15:57:15 crc kubenswrapper[4858]: I1205 15:57:15.794466 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="06101e90e0c8d4263fff741745c02d70e3ef2c85884f140efb5a5c6fee8a12b8" exitCode=0 Dec 05 15:57:15 crc kubenswrapper[4858]: I1205 15:57:15.794560 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"06101e90e0c8d4263fff741745c02d70e3ef2c85884f140efb5a5c6fee8a12b8"} Dec 05 15:57:15 crc kubenswrapper[4858]: I1205 15:57:15.795130 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df"} Dec 05 15:57:15 crc kubenswrapper[4858]: I1205 15:57:15.795168 4858 scope.go:117] "RemoveContainer" containerID="feef0e70a10f9cf6285253ecbf1b4dc283d7615153b8ecc7d836d792bd436a36" Dec 05 15:59:44 crc kubenswrapper[4858]: I1205 15:59:44.762765 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 15:59:44 crc kubenswrapper[4858]: I1205 15:59:44.763293 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.176449 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h"] Dec 05 16:00:00 crc kubenswrapper[4858]: E1205 16:00:00.177961 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerName="registry-server" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.177984 4858 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerName="registry-server" Dec 05 16:00:00 crc kubenswrapper[4858]: E1205 16:00:00.178003 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerName="extract-content" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.178027 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerName="extract-content" Dec 05 16:00:00 crc kubenswrapper[4858]: E1205 16:00:00.178052 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerName="extract-utilities" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.178060 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerName="extract-utilities" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.178323 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5ce72b-3ede-48b0-bbf9-095f141e8935" containerName="registry-server" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.180151 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.185370 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.185369 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.188261 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h"] Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.322004 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76mp5\" (UniqueName: \"kubernetes.io/projected/06e6dfb2-783e-4310-998a-22fe4aa5d74d-kube-api-access-76mp5\") pod \"collect-profiles-29415840-86n6h\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.322385 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e6dfb2-783e-4310-998a-22fe4aa5d74d-secret-volume\") pod \"collect-profiles-29415840-86n6h\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.322517 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e6dfb2-783e-4310-998a-22fe4aa5d74d-config-volume\") pod \"collect-profiles-29415840-86n6h\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.424271 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76mp5\" (UniqueName: \"kubernetes.io/projected/06e6dfb2-783e-4310-998a-22fe4aa5d74d-kube-api-access-76mp5\") pod \"collect-profiles-29415840-86n6h\" (UID: 
\"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.424323 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e6dfb2-783e-4310-998a-22fe4aa5d74d-secret-volume\") pod \"collect-profiles-29415840-86n6h\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.424395 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e6dfb2-783e-4310-998a-22fe4aa5d74d-config-volume\") pod \"collect-profiles-29415840-86n6h\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.425269 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e6dfb2-783e-4310-998a-22fe4aa5d74d-config-volume\") pod \"collect-profiles-29415840-86n6h\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.433368 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e6dfb2-783e-4310-998a-22fe4aa5d74d-secret-volume\") pod \"collect-profiles-29415840-86n6h\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.441057 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76mp5\" (UniqueName: \"kubernetes.io/projected/06e6dfb2-783e-4310-998a-22fe4aa5d74d-kube-api-access-76mp5\") pod \"collect-profiles-29415840-86n6h\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:00 crc kubenswrapper[4858]: I1205 16:00:00.502327 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:01 crc kubenswrapper[4858]: I1205 16:00:01.023897 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h"] Dec 05 16:00:01 crc kubenswrapper[4858]: I1205 16:00:01.503307 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" event={"ID":"06e6dfb2-783e-4310-998a-22fe4aa5d74d","Type":"ContainerStarted","Data":"6a2d171241dceb053700c11796e54845ae410a0fd8aec8c3c713ae8cc766c44f"} Dec 05 16:00:01 crc kubenswrapper[4858]: I1205 16:00:01.503640 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" event={"ID":"06e6dfb2-783e-4310-998a-22fe4aa5d74d","Type":"ContainerStarted","Data":"fae5c873a3d2c5d638c995bb3b130f6bab3278646b04c102fca59fd1eeccab0f"} Dec 05 16:00:01 crc kubenswrapper[4858]: I1205 16:00:01.568631 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" podStartSLOduration=1.525624805 podStartE2EDuration="1.525624805s" podCreationTimestamp="2025-12-05 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 16:00:01.516651632 +0000 UTC m=+7410.064249771" watchObservedRunningTime="2025-12-05 16:00:01.525624805 +0000 UTC m=+7410.073222944" Dec 05 16:00:02 crc kubenswrapper[4858]: I1205 16:00:02.514202 4858 generic.go:334] "Generic (PLEG): container finished" podID="06e6dfb2-783e-4310-998a-22fe4aa5d74d" containerID="6a2d171241dceb053700c11796e54845ae410a0fd8aec8c3c713ae8cc766c44f" exitCode=0 Dec 05 16:00:02 crc kubenswrapper[4858]: I1205 16:00:02.514404 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" event={"ID":"06e6dfb2-783e-4310-998a-22fe4aa5d74d","Type":"ContainerDied","Data":"6a2d171241dceb053700c11796e54845ae410a0fd8aec8c3c713ae8cc766c44f"} Dec 05 16:00:03 crc kubenswrapper[4858]: I1205 16:00:03.895554 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.027667 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e6dfb2-783e-4310-998a-22fe4aa5d74d-config-volume\") pod \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.027801 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76mp5\" (UniqueName: \"kubernetes.io/projected/06e6dfb2-783e-4310-998a-22fe4aa5d74d-kube-api-access-76mp5\") pod \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.027872 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e6dfb2-783e-4310-998a-22fe4aa5d74d-secret-volume\") pod \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\" (UID: \"06e6dfb2-783e-4310-998a-22fe4aa5d74d\") " Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.028296 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e6dfb2-783e-4310-998a-22fe4aa5d74d-config-volume" (OuterVolumeSpecName: "config-volume") pod "06e6dfb2-783e-4310-998a-22fe4aa5d74d" (UID: "06e6dfb2-783e-4310-998a-22fe4aa5d74d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.028937 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06e6dfb2-783e-4310-998a-22fe4aa5d74d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.038147 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06e6dfb2-783e-4310-998a-22fe4aa5d74d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "06e6dfb2-783e-4310-998a-22fe4aa5d74d" (UID: "06e6dfb2-783e-4310-998a-22fe4aa5d74d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.038513 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e6dfb2-783e-4310-998a-22fe4aa5d74d-kube-api-access-76mp5" (OuterVolumeSpecName: "kube-api-access-76mp5") pod "06e6dfb2-783e-4310-998a-22fe4aa5d74d" (UID: "06e6dfb2-783e-4310-998a-22fe4aa5d74d"). InnerVolumeSpecName "kube-api-access-76mp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.131405 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/06e6dfb2-783e-4310-998a-22fe4aa5d74d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.131433 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76mp5\" (UniqueName: \"kubernetes.io/projected/06e6dfb2-783e-4310-998a-22fe4aa5d74d-kube-api-access-76mp5\") on node \"crc\" DevicePath \"\"" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.531526 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" event={"ID":"06e6dfb2-783e-4310-998a-22fe4aa5d74d","Type":"ContainerDied","Data":"fae5c873a3d2c5d638c995bb3b130f6bab3278646b04c102fca59fd1eeccab0f"} Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.531583 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae5c873a3d2c5d638c995bb3b130f6bab3278646b04c102fca59fd1eeccab0f" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.531860 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415840-86n6h" Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.605368 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz"] Dec 05 16:00:04 crc kubenswrapper[4858]: I1205 16:00:04.614477 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415795-wxsvz"] Dec 05 16:00:05 crc kubenswrapper[4858]: I1205 16:00:05.911151 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12eda759-b210-484c-872f-f79d16e87084" path="/var/lib/kubelet/pods/12eda759-b210-484c-872f-f79d16e87084/volumes" Dec 05 16:00:14 crc kubenswrapper[4858]: I1205 16:00:14.759648 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:00:14 crc kubenswrapper[4858]: I1205 16:00:14.760162 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:00:44 crc kubenswrapper[4858]: I1205 16:00:44.760346 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:00:44 crc kubenswrapper[4858]: I1205 16:00:44.761141 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:00:44 crc kubenswrapper[4858]: 
I1205 16:00:44.764843 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 16:00:44 crc kubenswrapper[4858]: I1205 16:00:44.767082 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 16:00:44 crc kubenswrapper[4858]: I1205 16:00:44.767290 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" gracePeriod=600 Dec 05 16:00:44 crc kubenswrapper[4858]: E1205 16:00:44.895553 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:00:44 crc kubenswrapper[4858]: I1205 16:00:44.907016 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" exitCode=0 Dec 05 16:00:44 crc kubenswrapper[4858]: I1205 16:00:44.907069 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df"} Dec 05 16:00:44 crc kubenswrapper[4858]: I1205 16:00:44.907110 4858 scope.go:117] "RemoveContainer" containerID="06101e90e0c8d4263fff741745c02d70e3ef2c85884f140efb5a5c6fee8a12b8" Dec 05 16:00:44 crc kubenswrapper[4858]: I1205 16:00:44.907866 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:00:44 crc kubenswrapper[4858]: E1205 16:00:44.908165 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:00:58 crc kubenswrapper[4858]: I1205 16:00:58.898991 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:00:58 crc kubenswrapper[4858]: E1205 16:00:58.899667 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.152437 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29415841-9qp2f"] Dec 05 16:01:00 crc kubenswrapper[4858]: E1205 16:01:00.153143 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e6dfb2-783e-4310-998a-22fe4aa5d74d" containerName="collect-profiles" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.153157 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e6dfb2-783e-4310-998a-22fe4aa5d74d" containerName="collect-profiles" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.153554 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e6dfb2-783e-4310-998a-22fe4aa5d74d" containerName="collect-profiles" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.154204 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.172875 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29415841-9qp2f"] Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.217846 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44nbz\" (UniqueName: \"kubernetes.io/projected/de6bf582-3ee0-4994-9285-bc52b04ec882-kube-api-access-44nbz\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.217928 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-fernet-keys\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.218283 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-combined-ca-bundle\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.218511 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-config-data\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.320349 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44nbz\" (UniqueName: \"kubernetes.io/projected/de6bf582-3ee0-4994-9285-bc52b04ec882-kube-api-access-44nbz\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.320434 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-fernet-keys\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " 
pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.320534 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-combined-ca-bundle\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.320576 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-config-data\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.327577 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-fernet-keys\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.328960 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-combined-ca-bundle\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.330857 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-config-data\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.366021 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44nbz\" (UniqueName: \"kubernetes.io/projected/de6bf582-3ee0-4994-9285-bc52b04ec882-kube-api-access-44nbz\") pod \"keystone-cron-29415841-9qp2f\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:00 crc kubenswrapper[4858]: I1205 16:01:00.473218 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:01 crc kubenswrapper[4858]: I1205 16:01:01.074308 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29415841-9qp2f"] Dec 05 16:01:02 crc kubenswrapper[4858]: I1205 16:01:02.091444 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415841-9qp2f" event={"ID":"de6bf582-3ee0-4994-9285-bc52b04ec882","Type":"ContainerStarted","Data":"0443bf3dc4203787ee62e2d2dcba3c2c71c4f9a5e7a9f98586d713f1c7621a9a"} Dec 05 16:01:02 crc kubenswrapper[4858]: I1205 16:01:02.091782 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415841-9qp2f" event={"ID":"de6bf582-3ee0-4994-9285-bc52b04ec882","Type":"ContainerStarted","Data":"00f4ae55a6cbe1c4ddc8b24fafe935418b730ec7c986e2b1e4e6ea215a091c7b"} Dec 05 16:01:02 crc kubenswrapper[4858]: I1205 16:01:02.117282 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29415841-9qp2f" podStartSLOduration=2.117262599 podStartE2EDuration="2.117262599s" podCreationTimestamp="2025-12-05 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 16:01:02.108246505 +0000 UTC m=+7470.655844654" watchObservedRunningTime="2025-12-05 16:01:02.117262599 +0000 UTC m=+7470.664860738" Dec 05 16:01:03 crc kubenswrapper[4858]: I1205 16:01:03.247385 4858 scope.go:117] "RemoveContainer" containerID="afb30febab676670c687e46555fd9ef3fca58fc1eb16e33bba1e539f79f82413" Dec 05 16:01:05 crc kubenswrapper[4858]: I1205 16:01:05.874694 4858 generic.go:334] "Generic (PLEG): container finished" podID="de6bf582-3ee0-4994-9285-bc52b04ec882" containerID="0443bf3dc4203787ee62e2d2dcba3c2c71c4f9a5e7a9f98586d713f1c7621a9a" exitCode=0 Dec 05 16:01:05 crc kubenswrapper[4858]: I1205 16:01:05.874942 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415841-9qp2f" event={"ID":"de6bf582-3ee0-4994-9285-bc52b04ec882","Type":"ContainerDied","Data":"0443bf3dc4203787ee62e2d2dcba3c2c71c4f9a5e7a9f98586d713f1c7621a9a"} Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.254158 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.362261 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-fernet-keys\") pod \"de6bf582-3ee0-4994-9285-bc52b04ec882\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.362367 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-config-data\") pod \"de6bf582-3ee0-4994-9285-bc52b04ec882\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.362515 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44nbz\" (UniqueName: \"kubernetes.io/projected/de6bf582-3ee0-4994-9285-bc52b04ec882-kube-api-access-44nbz\") pod \"de6bf582-3ee0-4994-9285-bc52b04ec882\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.362544 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-combined-ca-bundle\") pod \"de6bf582-3ee0-4994-9285-bc52b04ec882\" (UID: \"de6bf582-3ee0-4994-9285-bc52b04ec882\") " Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.372180 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6bf582-3ee0-4994-9285-bc52b04ec882-kube-api-access-44nbz" (OuterVolumeSpecName: "kube-api-access-44nbz") pod "de6bf582-3ee0-4994-9285-bc52b04ec882" (UID: "de6bf582-3ee0-4994-9285-bc52b04ec882"). InnerVolumeSpecName "kube-api-access-44nbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.374000 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "de6bf582-3ee0-4994-9285-bc52b04ec882" (UID: "de6bf582-3ee0-4994-9285-bc52b04ec882"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.416607 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-config-data" (OuterVolumeSpecName: "config-data") pod "de6bf582-3ee0-4994-9285-bc52b04ec882" (UID: "de6bf582-3ee0-4994-9285-bc52b04ec882"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.424347 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de6bf582-3ee0-4994-9285-bc52b04ec882" (UID: "de6bf582-3ee0-4994-9285-bc52b04ec882"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.464462 4858 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.464494 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.464507 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44nbz\" (UniqueName: \"kubernetes.io/projected/de6bf582-3ee0-4994-9285-bc52b04ec882-kube-api-access-44nbz\") on node \"crc\" DevicePath \"\"" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.464521 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de6bf582-3ee0-4994-9285-bc52b04ec882-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.893029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415841-9qp2f" event={"ID":"de6bf582-3ee0-4994-9285-bc52b04ec882","Type":"ContainerDied","Data":"00f4ae55a6cbe1c4ddc8b24fafe935418b730ec7c986e2b1e4e6ea215a091c7b"} Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.893499 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00f4ae55a6cbe1c4ddc8b24fafe935418b730ec7c986e2b1e4e6ea215a091c7b" Dec 05 16:01:07 crc kubenswrapper[4858]: I1205 16:01:07.893135 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415841-9qp2f" Dec 05 16:01:09 crc kubenswrapper[4858]: I1205 16:01:09.900989 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:01:09 crc kubenswrapper[4858]: E1205 16:01:09.901808 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:01:21 crc kubenswrapper[4858]: I1205 16:01:21.917541 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:01:21 crc kubenswrapper[4858]: E1205 16:01:21.918503 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:01:35 crc kubenswrapper[4858]: I1205 16:01:35.899546 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:01:35 crc kubenswrapper[4858]: E1205 16:01:35.900404 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:01:46 crc kubenswrapper[4858]: I1205 16:01:46.899339 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:01:46 crc kubenswrapper[4858]: E1205 16:01:46.900027 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:01:59 crc kubenswrapper[4858]: I1205 16:01:59.900105 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:01:59 crc kubenswrapper[4858]: E1205 16:01:59.901665 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:02:13 crc kubenswrapper[4858]: I1205 16:02:13.905534 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:02:13 crc kubenswrapper[4858]: E1205 16:02:13.906416 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:02:27 crc kubenswrapper[4858]: I1205 16:02:27.899195 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:02:27 crc kubenswrapper[4858]: E1205 16:02:27.899919 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:02:39 crc kubenswrapper[4858]: I1205 16:02:39.899973 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:02:39 crc kubenswrapper[4858]: E1205 16:02:39.900731 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:02:53 crc kubenswrapper[4858]: I1205 16:02:53.900609 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:02:53 crc kubenswrapper[4858]: E1205 16:02:53.901549 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:03:05 crc kubenswrapper[4858]: I1205 16:03:05.899637 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:03:05 crc kubenswrapper[4858]: E1205 16:03:05.900506 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:03:16 crc kubenswrapper[4858]: I1205 16:03:16.898914 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:03:16 crc kubenswrapper[4858]: E1205 16:03:16.899586 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:03:29 crc kubenswrapper[4858]: I1205 16:03:29.901140 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:03:29 crc kubenswrapper[4858]: E1205 16:03:29.901754 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:03:44 crc kubenswrapper[4858]: I1205 16:03:44.899106 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:03:44 crc kubenswrapper[4858]: E1205 16:03:44.899891 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:03:45 crc kubenswrapper[4858]: I1205 16:03:45.967448 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cf977bbb5-pnk47"] Dec 05 16:03:45 crc kubenswrapper[4858]: E1205 16:03:45.968531 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6bf582-3ee0-4994-9285-bc52b04ec882" containerName="keystone-cron" Dec 05 16:03:45 crc kubenswrapper[4858]: I1205 16:03:45.968553 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6bf582-3ee0-4994-9285-bc52b04ec882" containerName="keystone-cron" Dec 05 16:03:45 crc kubenswrapper[4858]: I1205 16:03:45.968792 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6bf582-3ee0-4994-9285-bc52b04ec882" containerName="keystone-cron" Dec 05 16:03:45 crc kubenswrapper[4858]: I1205 16:03:45.977672 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:45 crc kubenswrapper[4858]: I1205 16:03:45.985815 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cf977bbb5-pnk47"] Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.087307 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-public-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.087937 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-config\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.087977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-ovndb-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.088096 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-internal-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.088131 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4l6\" (UniqueName: \"kubernetes.io/projected/400902d6-e109-4300-9cb5-27c3e8c2b427-kube-api-access-zn4l6\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.088440 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-httpd-config\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc 
kubenswrapper[4858]: I1205 16:03:46.088492 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-combined-ca-bundle\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.190175 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-internal-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.190218 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4l6\" (UniqueName: \"kubernetes.io/projected/400902d6-e109-4300-9cb5-27c3e8c2b427-kube-api-access-zn4l6\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.190449 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-httpd-config\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.190482 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-combined-ca-bundle\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.190525 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-public-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.190545 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-config\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.190562 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-ovndb-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.199029 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-public-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.199393 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-internal-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.199526 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-ovndb-tls-certs\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.200034 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-httpd-config\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.200495 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-config\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.204747 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/400902d6-e109-4300-9cb5-27c3e8c2b427-combined-ca-bundle\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.210393 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4l6\" (UniqueName: \"kubernetes.io/projected/400902d6-e109-4300-9cb5-27c3e8c2b427-kube-api-access-zn4l6\") pod \"neutron-cf977bbb5-pnk47\" (UID: \"400902d6-e109-4300-9cb5-27c3e8c2b427\") " pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:46 crc kubenswrapper[4858]: I1205 16:03:46.317653 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:47 crc kubenswrapper[4858]: I1205 16:03:47.004190 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cf977bbb5-pnk47"] Dec 05 16:03:47 crc kubenswrapper[4858]: I1205 16:03:47.435488 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf977bbb5-pnk47" event={"ID":"400902d6-e109-4300-9cb5-27c3e8c2b427","Type":"ContainerStarted","Data":"b6f80b404de70692b620772e150ab3cccafcfa1295e1b07ed459efe27e85088c"} Dec 05 16:03:47 crc kubenswrapper[4858]: I1205 16:03:47.435814 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:03:47 crc kubenswrapper[4858]: I1205 16:03:47.435858 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf977bbb5-pnk47" event={"ID":"400902d6-e109-4300-9cb5-27c3e8c2b427","Type":"ContainerStarted","Data":"a11f31e832d679416ba18fc36c1b1232c23d42d434fd5118f9898bfada66580b"} Dec 05 16:03:47 crc kubenswrapper[4858]: I1205 16:03:47.435871 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf977bbb5-pnk47" event={"ID":"400902d6-e109-4300-9cb5-27c3e8c2b427","Type":"ContainerStarted","Data":"b5ca778e49fa35403d432aacecfbda8f8e702235e302917dac23a11cf2527bce"} Dec 05 16:03:47 crc kubenswrapper[4858]: I1205 16:03:47.458871 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cf977bbb5-pnk47" podStartSLOduration=2.4588198390000002 podStartE2EDuration="2.458819839s" podCreationTimestamp="2025-12-05 16:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 16:03:47.453560537 +0000 UTC m=+7636.001158706" watchObservedRunningTime="2025-12-05 16:03:47.458819839 +0000 UTC m=+7636.006417978" Dec 05 16:03:55 crc kubenswrapper[4858]: I1205 16:03:55.900281 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:03:55 crc kubenswrapper[4858]: E1205 16:03:55.901093 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:04:09 crc kubenswrapper[4858]: I1205 16:04:09.901755 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:04:09 crc kubenswrapper[4858]: E1205 16:04:09.902612 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:04:16 crc kubenswrapper[4858]: I1205 16:04:16.335560 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cf977bbb5-pnk47" Dec 05 16:04:16 crc kubenswrapper[4858]: I1205 16:04:16.437137 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-766f4465bf-nsk26"] Dec 05 16:04:16 crc kubenswrapper[4858]: I1205 16:04:16.444468 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-766f4465bf-nsk26" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerName="neutron-api" containerID="cri-o://c36c3837d0105467715ced1dd7c74240da14da3530f6090d32afbc0607ecee27" gracePeriod=30 Dec 05 16:04:16 crc kubenswrapper[4858]: I1205 16:04:16.444900 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-766f4465bf-nsk26" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerName="neutron-httpd" containerID="cri-o://6abb36cecb5ed2e30e90590c773e29c3064b3f212d88b8aa9308162d625a0c26" gracePeriod=30 Dec 05 16:04:16 crc kubenswrapper[4858]: I1205 16:04:16.689006 4858 generic.go:334] "Generic (PLEG): container finished" podID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerID="6abb36cecb5ed2e30e90590c773e29c3064b3f212d88b8aa9308162d625a0c26" exitCode=0 Dec 05 16:04:16 crc kubenswrapper[4858]: I1205 16:04:16.689260 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f4465bf-nsk26" event={"ID":"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b","Type":"ContainerDied","Data":"6abb36cecb5ed2e30e90590c773e29c3064b3f212d88b8aa9308162d625a0c26"} Dec 05 16:04:18 crc kubenswrapper[4858]: I1205 16:04:18.726885 4858 generic.go:334] "Generic (PLEG): container finished" podID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerID="c36c3837d0105467715ced1dd7c74240da14da3530f6090d32afbc0607ecee27" exitCode=0 Dec 05 16:04:18 crc kubenswrapper[4858]: I1205 16:04:18.726926 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f4465bf-nsk26" event={"ID":"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b","Type":"ContainerDied","Data":"c36c3837d0105467715ced1dd7c74240da14da3530f6090d32afbc0607ecee27"} Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.324547 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-766f4465bf-nsk26" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.440774 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-internal-tls-certs\") pod \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.440919 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-config\") pod \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.441043 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-combined-ca-bundle\") pod \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.441075 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-public-tls-certs\") pod \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.441135 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-ovndb-tls-certs\") pod \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.441158 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c9fj\" (UniqueName: \"kubernetes.io/projected/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-kube-api-access-6c9fj\") pod \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.441262 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-httpd-config\") pod \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\" (UID: \"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b\") " Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.464127 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-kube-api-access-6c9fj" (OuterVolumeSpecName: "kube-api-access-6c9fj") pod "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" (UID: "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b"). InnerVolumeSpecName "kube-api-access-6c9fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.477209 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" (UID: "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.543772 4858 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.544121 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c9fj\" (UniqueName: \"kubernetes.io/projected/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-kube-api-access-6c9fj\") on node \"crc\" DevicePath \"\"" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.544196 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" (UID: "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.544758 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" (UID: "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.548333 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-config" (OuterVolumeSpecName: "config") pod "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" (UID: "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.550340 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" (UID: "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.568616 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" (UID: "bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.645452 4858 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.645482 4858 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.645492 4858 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.645501 4858 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.645511 4858 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b-config\") on node \"crc\" DevicePath \"\"" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.735724 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f4465bf-nsk26" event={"ID":"bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b","Type":"ContainerDied","Data":"50e7fe74aa3fcba3f6272dfe1f043a8ce3c3132be166e9d07b51a888e229ea2d"} Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.735769 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-766f4465bf-nsk26" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.735774 4858 scope.go:117] "RemoveContainer" containerID="6abb36cecb5ed2e30e90590c773e29c3064b3f212d88b8aa9308162d625a0c26" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.762959 4858 scope.go:117] "RemoveContainer" containerID="c36c3837d0105467715ced1dd7c74240da14da3530f6090d32afbc0607ecee27" Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.781025 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-766f4465bf-nsk26"] Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.788897 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-766f4465bf-nsk26"] Dec 05 16:04:19 crc kubenswrapper[4858]: I1205 16:04:19.910026 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" path="/var/lib/kubelet/pods/bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b/volumes" Dec 05 16:04:24 crc kubenswrapper[4858]: I1205 16:04:24.899704 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:04:24 crc kubenswrapper[4858]: E1205 16:04:24.900626 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:04:35 crc kubenswrapper[4858]: I1205 16:04:35.899169 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:04:35 crc kubenswrapper[4858]: E1205 16:04:35.899750 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:04:48 crc kubenswrapper[4858]: I1205 16:04:48.899494 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:04:48 crc kubenswrapper[4858]: E1205 16:04:48.900291 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:04:59 crc kubenswrapper[4858]: I1205 16:04:59.899709 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:04:59 crc kubenswrapper[4858]: E1205 16:04:59.900389 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:05:12 crc kubenswrapper[4858]: I1205 16:05:12.899873 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:05:12 crc kubenswrapper[4858]: E1205 16:05:12.900501 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.541662 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zlrbg"] Dec 05 16:05:13 crc kubenswrapper[4858]: E1205 16:05:13.542064 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerName="neutron-httpd" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.542080 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerName="neutron-httpd" Dec 05 16:05:13 crc kubenswrapper[4858]: E1205 16:05:13.542099 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerName="neutron-api" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.542105 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerName="neutron-api" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.542333 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerName="neutron-httpd" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.542371 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfbeb1b7-784e-4734-b0a7-4d6ba7b7ad3b" containerName="neutron-api" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.546595 4858 util.go:30] "No sandbox for pod can be found. 
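
[annotation] The repeating "back-off 5m0s restarting failed container" errors above (16:04:24, 16:04:35, 16:04:48, ...) are the kubelet in CrashLoopBackOff: the container is not retried on every sync, only logged, because the restart delay has hit its ceiling. A sketch of that delay schedule; the 10s base and 5m cap match the kubelet's historical defaults, but treat the exact constants here as illustrative rather than authoritative:

```go
// Restart back-off roughly doubles per crash and saturates at the cap,
// which is why every sync above reports the same "back-off 5m0s".
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second // assumed initial delay
		maxDelay = 5 * time.Minute  // matches the "5m0s" in the log
	)
	delay := base
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // from here on, every sync logs "back-off 5m0s"
		}
	}
}
```
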
Need to start a new one" pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.563543 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlrbg"] Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.593396 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98qdz\" (UniqueName: \"kubernetes.io/projected/e061ecc8-f69e-4593-83d8-bceb12e29cb9-kube-api-access-98qdz\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.593816 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-utilities\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.593861 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-catalog-content\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.695674 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98qdz\" (UniqueName: \"kubernetes.io/projected/e061ecc8-f69e-4593-83d8-bceb12e29cb9-kube-api-access-98qdz\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.695849 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-utilities\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.695878 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-catalog-content\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.696400 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-catalog-content\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.696520 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-utilities\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.717902 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-98qdz\" (UniqueName: \"kubernetes.io/projected/e061ecc8-f69e-4593-83d8-bceb12e29cb9-kube-api-access-98qdz\") pod \"community-operators-zlrbg\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.747782 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f42rz"] Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.749900 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.765810 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f42rz"] Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.798592 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv2gw\" (UniqueName: \"kubernetes.io/projected/420fa739-a306-4bb1-9ace-6a607ee51b08-kube-api-access-xv2gw\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.798719 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-utilities\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.798743 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-catalog-content\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.867715 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.900119 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-utilities\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.900170 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-catalog-content\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.900228 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv2gw\" (UniqueName: \"kubernetes.io/projected/420fa739-a306-4bb1-9ace-6a607ee51b08-kube-api-access-xv2gw\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.901307 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-utilities\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.901539 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-catalog-content\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:13 crc kubenswrapper[4858]: I1205 16:05:13.922095 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv2gw\" (UniqueName: \"kubernetes.io/projected/420fa739-a306-4bb1-9ace-6a607ee51b08-kube-api-access-xv2gw\") pod \"certified-operators-f42rz\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:14 crc kubenswrapper[4858]: I1205 16:05:14.106396 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:14 crc kubenswrapper[4858]: I1205 16:05:14.363054 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zlrbg"] Dec 05 16:05:14 crc kubenswrapper[4858]: I1205 16:05:14.579055 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f42rz"] Dec 05 16:05:14 crc kubenswrapper[4858]: W1205 16:05:14.629232 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420fa739_a306_4bb1_9ace_6a607ee51b08.slice/crio-939f070f22d535055bdeb6846ab98682eb6c632dd7132bf178aad663e05a05e6 WatchSource:0}: Error finding container 939f070f22d535055bdeb6846ab98682eb6c632dd7132bf178aad663e05a05e6: Status 404 returned error can't find the container with id 939f070f22d535055bdeb6846ab98682eb6c632dd7132bf178aad663e05a05e6 Dec 05 16:05:15 crc kubenswrapper[4858]: E1205 16:05:15.216664 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420fa739_a306_4bb1_9ace_6a607ee51b08.slice/crio-83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989.scope\": RecentStats: unable to find data in memory cache]" Dec 05 16:05:15 crc kubenswrapper[4858]: I1205 16:05:15.219465 4858 generic.go:334] "Generic (PLEG): container finished" podID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerID="83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989" exitCode=0 Dec 05 16:05:15 crc kubenswrapper[4858]: I1205 16:05:15.219513 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f42rz" event={"ID":"420fa739-a306-4bb1-9ace-6a607ee51b08","Type":"ContainerDied","Data":"83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989"} Dec 05 16:05:15 crc kubenswrapper[4858]: I1205 16:05:15.219565 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f42rz" event={"ID":"420fa739-a306-4bb1-9ace-6a607ee51b08","Type":"ContainerStarted","Data":"939f070f22d535055bdeb6846ab98682eb6c632dd7132bf178aad663e05a05e6"} Dec 05 16:05:15 crc kubenswrapper[4858]: I1205 16:05:15.222247 4858 generic.go:334] "Generic (PLEG): container finished" podID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerID="4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3" exitCode=0 Dec 05 16:05:15 crc kubenswrapper[4858]: I1205 16:05:15.222309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrbg" event={"ID":"e061ecc8-f69e-4593-83d8-bceb12e29cb9","Type":"ContainerDied","Data":"4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3"} Dec 05 16:05:15 crc kubenswrapper[4858]: I1205 16:05:15.222335 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrbg" event={"ID":"e061ecc8-f69e-4593-83d8-bceb12e29cb9","Type":"ContainerStarted","Data":"813796117c050a80aa870f41c6b4fa375a8f04c34fa2c4ca035609fbd90ba7e1"} Dec 05 16:05:15 crc kubenswrapper[4858]: I1205 16:05:15.224253 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 16:05:15 crc kubenswrapper[4858]: I1205 16:05:15.947378 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xq7qc"] Dec 05 16:05:15 crc kubenswrapper[4858]: 
I1205 16:05:15.952658 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.012871 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xq7qc"] Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.046053 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-utilities\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.046388 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kj55\" (UniqueName: \"kubernetes.io/projected/47e3fe33-2dbe-42f6-a9f0-88092c943414-kube-api-access-8kj55\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.046543 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-catalog-content\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.140073 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8vtk"] Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.142081 4858 util.go:30] "No sandbox for pod can be found. 
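
[annotation] The "SyncLoop (PLEG): event for pod" entries above carry the whole container lifecycle as ContainerStarted/ContainerDied pairs (the init containers finishing with exitCode=0, then the registry-server starting). A quick stdin filter to reconstruct that timeline from journal text; the regex is fitted to the exact formatting seen in these lines and is not a stable interface:

```go
// Tally PLEG lifecycle events per pod from journal text on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var plegRE = regexp.MustCompile(
	`event for pod" pod="([^"]+)" event=.*"Type":"(\w+)","Data":"(\w{12})`)

func main() {
	counts := map[string]map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := plegRE.FindStringSubmatch(sc.Text()); m != nil {
			pod, typ := m[1], m[2]
			if counts[pod] == nil {
				counts[pod] = map[string]int{}
			}
			counts[pod][typ]++
			fmt.Printf("%-55s %-16s %s...\n", pod, typ, m[3]) // short container/sandbox ID
		}
	}
	for pod, c := range counts {
		fmt.Printf("%s: started=%d died=%d\n", pod, c["ContainerStarted"], c["ContainerDied"])
	}
}
```
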
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.148112 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kj55\" (UniqueName: \"kubernetes.io/projected/47e3fe33-2dbe-42f6-a9f0-88092c943414-kube-api-access-8kj55\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.148180 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-catalog-content\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.148288 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-utilities\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.148690 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-utilities\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.149199 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-catalog-content\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.155408 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8vtk"] Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.200850 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kj55\" (UniqueName: \"kubernetes.io/projected/47e3fe33-2dbe-42f6-a9f0-88092c943414-kube-api-access-8kj55\") pod \"redhat-marketplace-xq7qc\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.249573 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-catalog-content\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.249684 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7smk7\" (UniqueName: \"kubernetes.io/projected/ed8b9276-f428-4103-9cdb-2a867e287256-kube-api-access-7smk7\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.249714 4858 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-utilities\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.285026 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.351416 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-catalog-content\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.351512 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7smk7\" (UniqueName: \"kubernetes.io/projected/ed8b9276-f428-4103-9cdb-2a867e287256-kube-api-access-7smk7\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.351533 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-utilities\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.352066 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-catalog-content\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.352094 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-utilities\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.369182 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7smk7\" (UniqueName: \"kubernetes.io/projected/ed8b9276-f428-4103-9cdb-2a867e287256-kube-api-access-7smk7\") pod \"redhat-operators-k8vtk\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.474801 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.786806 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xq7qc"] Dec 05 16:05:16 crc kubenswrapper[4858]: W1205 16:05:16.800480 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47e3fe33_2dbe_42f6_a9f0_88092c943414.slice/crio-1c3d656f5ba1cfd692f5207ac9fb5adfd77e0d833e304e2af8cd38dfae0b0a2c WatchSource:0}: Error finding container 1c3d656f5ba1cfd692f5207ac9fb5adfd77e0d833e304e2af8cd38dfae0b0a2c: Status 404 returned error can't find the container with id 1c3d656f5ba1cfd692f5207ac9fb5adfd77e0d833e304e2af8cd38dfae0b0a2c Dec 05 16:05:16 crc kubenswrapper[4858]: I1205 16:05:16.967749 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8vtk"] Dec 05 16:05:16 crc kubenswrapper[4858]: W1205 16:05:16.970162 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded8b9276_f428_4103_9cdb_2a867e287256.slice/crio-be65462ca05e6c5e4d33ec4bbce26c10a4d045dffa72e1e1cd0934d5fe766c3a WatchSource:0}: Error finding container be65462ca05e6c5e4d33ec4bbce26c10a4d045dffa72e1e1cd0934d5fe766c3a: Status 404 returned error can't find the container with id be65462ca05e6c5e4d33ec4bbce26c10a4d045dffa72e1e1cd0934d5fe766c3a Dec 05 16:05:17 crc kubenswrapper[4858]: I1205 16:05:17.244021 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f42rz" event={"ID":"420fa739-a306-4bb1-9ace-6a607ee51b08","Type":"ContainerStarted","Data":"e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433"} Dec 05 16:05:17 crc kubenswrapper[4858]: I1205 16:05:17.246410 4858 generic.go:334] "Generic (PLEG): container finished" podID="ed8b9276-f428-4103-9cdb-2a867e287256" containerID="cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00" exitCode=0 Dec 05 16:05:17 crc kubenswrapper[4858]: I1205 16:05:17.246487 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8vtk" event={"ID":"ed8b9276-f428-4103-9cdb-2a867e287256","Type":"ContainerDied","Data":"cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00"} Dec 05 16:05:17 crc kubenswrapper[4858]: I1205 16:05:17.246511 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8vtk" event={"ID":"ed8b9276-f428-4103-9cdb-2a867e287256","Type":"ContainerStarted","Data":"be65462ca05e6c5e4d33ec4bbce26c10a4d045dffa72e1e1cd0934d5fe766c3a"} Dec 05 16:05:17 crc kubenswrapper[4858]: I1205 16:05:17.250018 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrbg" event={"ID":"e061ecc8-f69e-4593-83d8-bceb12e29cb9","Type":"ContainerStarted","Data":"9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d"} Dec 05 16:05:17 crc kubenswrapper[4858]: I1205 16:05:17.253235 4858 generic.go:334] "Generic (PLEG): container finished" podID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerID="eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f" exitCode=0 Dec 05 16:05:17 crc kubenswrapper[4858]: I1205 16:05:17.253281 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xq7qc" 
event={"ID":"47e3fe33-2dbe-42f6-a9f0-88092c943414","Type":"ContainerDied","Data":"eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f"} Dec 05 16:05:17 crc kubenswrapper[4858]: I1205 16:05:17.253303 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xq7qc" event={"ID":"47e3fe33-2dbe-42f6-a9f0-88092c943414","Type":"ContainerStarted","Data":"1c3d656f5ba1cfd692f5207ac9fb5adfd77e0d833e304e2af8cd38dfae0b0a2c"} Dec 05 16:05:20 crc kubenswrapper[4858]: I1205 16:05:20.276143 4858 generic.go:334] "Generic (PLEG): container finished" podID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerID="e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433" exitCode=0 Dec 05 16:05:20 crc kubenswrapper[4858]: I1205 16:05:20.276197 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f42rz" event={"ID":"420fa739-a306-4bb1-9ace-6a607ee51b08","Type":"ContainerDied","Data":"e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433"} Dec 05 16:05:20 crc kubenswrapper[4858]: I1205 16:05:20.280398 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8vtk" event={"ID":"ed8b9276-f428-4103-9cdb-2a867e287256","Type":"ContainerStarted","Data":"446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74"} Dec 05 16:05:20 crc kubenswrapper[4858]: I1205 16:05:20.283733 4858 generic.go:334] "Generic (PLEG): container finished" podID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerID="9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d" exitCode=0 Dec 05 16:05:20 crc kubenswrapper[4858]: I1205 16:05:20.283848 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrbg" event={"ID":"e061ecc8-f69e-4593-83d8-bceb12e29cb9","Type":"ContainerDied","Data":"9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d"} Dec 05 16:05:20 crc kubenswrapper[4858]: I1205 16:05:20.286119 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xq7qc" event={"ID":"47e3fe33-2dbe-42f6-a9f0-88092c943414","Type":"ContainerStarted","Data":"4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af"} Dec 05 16:05:21 crc kubenswrapper[4858]: I1205 16:05:21.297110 4858 generic.go:334] "Generic (PLEG): container finished" podID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerID="4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af" exitCode=0 Dec 05 16:05:21 crc kubenswrapper[4858]: I1205 16:05:21.297279 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xq7qc" event={"ID":"47e3fe33-2dbe-42f6-a9f0-88092c943414","Type":"ContainerDied","Data":"4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af"} Dec 05 16:05:23 crc kubenswrapper[4858]: I1205 16:05:23.316628 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xq7qc" event={"ID":"47e3fe33-2dbe-42f6-a9f0-88092c943414","Type":"ContainerStarted","Data":"33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29"} Dec 05 16:05:23 crc kubenswrapper[4858]: I1205 16:05:23.319393 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f42rz" event={"ID":"420fa739-a306-4bb1-9ace-6a607ee51b08","Type":"ContainerStarted","Data":"33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b"} Dec 05 16:05:23 crc kubenswrapper[4858]: I1205 
16:05:23.322640 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrbg" event={"ID":"e061ecc8-f69e-4593-83d8-bceb12e29cb9","Type":"ContainerStarted","Data":"9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689"} Dec 05 16:05:23 crc kubenswrapper[4858]: I1205 16:05:23.343299 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xq7qc" podStartSLOduration=3.780121499 podStartE2EDuration="8.34326966s" podCreationTimestamp="2025-12-05 16:05:15 +0000 UTC" firstStartedPulling="2025-12-05 16:05:17.256178527 +0000 UTC m=+7725.803776666" lastFinishedPulling="2025-12-05 16:05:21.819326688 +0000 UTC m=+7730.366924827" observedRunningTime="2025-12-05 16:05:23.340507805 +0000 UTC m=+7731.888105954" watchObservedRunningTime="2025-12-05 16:05:23.34326966 +0000 UTC m=+7731.890867789" Dec 05 16:05:23 crc kubenswrapper[4858]: I1205 16:05:23.366602 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f42rz" podStartSLOduration=3.898163451 podStartE2EDuration="10.366578891s" podCreationTimestamp="2025-12-05 16:05:13 +0000 UTC" firstStartedPulling="2025-12-05 16:05:15.221927388 +0000 UTC m=+7723.769525527" lastFinishedPulling="2025-12-05 16:05:21.690342828 +0000 UTC m=+7730.237940967" observedRunningTime="2025-12-05 16:05:23.355351157 +0000 UTC m=+7731.902949296" watchObservedRunningTime="2025-12-05 16:05:23.366578891 +0000 UTC m=+7731.914177030" Dec 05 16:05:23 crc kubenswrapper[4858]: I1205 16:05:23.371722 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zlrbg" podStartSLOduration=3.879591847 podStartE2EDuration="10.371701709s" podCreationTimestamp="2025-12-05 16:05:13 +0000 UTC" firstStartedPulling="2025-12-05 16:05:15.224472926 +0000 UTC m=+7723.772071055" lastFinishedPulling="2025-12-05 16:05:21.716582778 +0000 UTC m=+7730.264180917" observedRunningTime="2025-12-05 16:05:23.370157067 +0000 UTC m=+7731.917755216" watchObservedRunningTime="2025-12-05 16:05:23.371701709 +0000 UTC m=+7731.919299848" Dec 05 16:05:23 crc kubenswrapper[4858]: I1205 16:05:23.868433 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:23 crc kubenswrapper[4858]: I1205 16:05:23.868496 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:24 crc kubenswrapper[4858]: I1205 16:05:24.108107 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:24 crc kubenswrapper[4858]: I1205 16:05:24.108701 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:24 crc kubenswrapper[4858]: I1205 16:05:24.337432 4858 generic.go:334] "Generic (PLEG): container finished" podID="ed8b9276-f428-4103-9cdb-2a867e287256" containerID="446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74" exitCode=0 Dec 05 16:05:24 crc kubenswrapper[4858]: I1205 16:05:24.338305 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8vtk" event={"ID":"ed8b9276-f428-4103-9cdb-2a867e287256","Type":"ContainerDied","Data":"446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74"} Dec 05 16:05:24 crc kubenswrapper[4858]: 
I1205 16:05:24.935429 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zlrbg" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="registry-server" probeResult="failure" output=< Dec 05 16:05:24 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:05:24 crc kubenswrapper[4858]: > Dec 05 16:05:25 crc kubenswrapper[4858]: I1205 16:05:25.194652 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f42rz" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="registry-server" probeResult="failure" output=< Dec 05 16:05:25 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:05:25 crc kubenswrapper[4858]: > Dec 05 16:05:25 crc kubenswrapper[4858]: I1205 16:05:25.347527 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8vtk" event={"ID":"ed8b9276-f428-4103-9cdb-2a867e287256","Type":"ContainerStarted","Data":"e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57"} Dec 05 16:05:25 crc kubenswrapper[4858]: I1205 16:05:25.365478 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8vtk" podStartSLOduration=1.6432806 podStartE2EDuration="9.365460702s" podCreationTimestamp="2025-12-05 16:05:16 +0000 UTC" firstStartedPulling="2025-12-05 16:05:17.249334022 +0000 UTC m=+7725.796932161" lastFinishedPulling="2025-12-05 16:05:24.971514124 +0000 UTC m=+7733.519112263" observedRunningTime="2025-12-05 16:05:25.363376196 +0000 UTC m=+7733.910974335" watchObservedRunningTime="2025-12-05 16:05:25.365460702 +0000 UTC m=+7733.913058841" Dec 05 16:05:25 crc kubenswrapper[4858]: I1205 16:05:25.899528 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:05:25 crc kubenswrapper[4858]: E1205 16:05:25.899789 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:05:26 crc kubenswrapper[4858]: I1205 16:05:26.286085 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:26 crc kubenswrapper[4858]: I1205 16:05:26.286439 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:26 crc kubenswrapper[4858]: I1205 16:05:26.476013 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:26 crc kubenswrapper[4858]: I1205 16:05:26.476062 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:27 crc kubenswrapper[4858]: I1205 16:05:27.369235 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-xq7qc" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="registry-server" probeResult="failure" output=< Dec 05 16:05:27 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 
16:05:27 crc kubenswrapper[4858]: > Dec 05 16:05:27 crc kubenswrapper[4858]: I1205 16:05:27.533939 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8vtk" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="registry-server" probeResult="failure" output=< Dec 05 16:05:27 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:05:27 crc kubenswrapper[4858]: > Dec 05 16:05:34 crc kubenswrapper[4858]: I1205 16:05:34.922330 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zlrbg" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="registry-server" probeResult="failure" output=< Dec 05 16:05:34 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:05:34 crc kubenswrapper[4858]: > Dec 05 16:05:35 crc kubenswrapper[4858]: I1205 16:05:35.159235 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f42rz" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="registry-server" probeResult="failure" output=< Dec 05 16:05:35 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:05:35 crc kubenswrapper[4858]: > Dec 05 16:05:36 crc kubenswrapper[4858]: I1205 16:05:36.353974 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:36 crc kubenswrapper[4858]: I1205 16:05:36.411642 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:36 crc kubenswrapper[4858]: I1205 16:05:36.900303 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:05:36 crc kubenswrapper[4858]: E1205 16:05:36.901017 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:05:37 crc kubenswrapper[4858]: I1205 16:05:37.528771 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8vtk" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="registry-server" probeResult="failure" output=< Dec 05 16:05:37 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:05:37 crc kubenswrapper[4858]: > Dec 05 16:05:37 crc kubenswrapper[4858]: I1205 16:05:37.529387 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xq7qc"] Dec 05 16:05:37 crc kubenswrapper[4858]: I1205 16:05:37.531175 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xq7qc" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="registry-server" containerID="cri-o://33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29" gracePeriod=2 Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.359759 4858 util.go:48] "No ready sandbox for pod can be found. 
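
[annotation] The pod_startup_latency_tracker entries at 16:05:23 above are internally consistent and worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A worked check with the exact timestamps from the redhat-marketplace-xq7qc entry:

```go
// Reproduce the logged podStartE2EDuration=8.34326966s and
// podStartSLOduration=3.780121499s from the raw timestamps.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-05 16:05:15 +0000 UTC")
	firstPull := mustParse("2025-12-05 16:05:17.256178527 +0000 UTC")
	lastPull := mustParse("2025-12-05 16:05:21.819326688 +0000 UTC")
	running := mustParse("2025-12-05 16:05:23.34326966 +0000 UTC")

	e2e := running.Sub(created)          // 8.34326966s  = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.780121499s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```
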
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.456313 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-utilities\") pod \"47e3fe33-2dbe-42f6-a9f0-88092c943414\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.456992 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kj55\" (UniqueName: \"kubernetes.io/projected/47e3fe33-2dbe-42f6-a9f0-88092c943414-kube-api-access-8kj55\") pod \"47e3fe33-2dbe-42f6-a9f0-88092c943414\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.457159 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-catalog-content\") pod \"47e3fe33-2dbe-42f6-a9f0-88092c943414\" (UID: \"47e3fe33-2dbe-42f6-a9f0-88092c943414\") " Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.459870 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-utilities" (OuterVolumeSpecName: "utilities") pod "47e3fe33-2dbe-42f6-a9f0-88092c943414" (UID: "47e3fe33-2dbe-42f6-a9f0-88092c943414"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.468645 4858 generic.go:334] "Generic (PLEG): container finished" podID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerID="33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29" exitCode=0 Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.468711 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xq7qc" event={"ID":"47e3fe33-2dbe-42f6-a9f0-88092c943414","Type":"ContainerDied","Data":"33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29"} Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.468745 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xq7qc" event={"ID":"47e3fe33-2dbe-42f6-a9f0-88092c943414","Type":"ContainerDied","Data":"1c3d656f5ba1cfd692f5207ac9fb5adfd77e0d833e304e2af8cd38dfae0b0a2c"} Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.468768 4858 scope.go:117] "RemoveContainer" containerID="33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.468716 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xq7qc" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.471170 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e3fe33-2dbe-42f6-a9f0-88092c943414-kube-api-access-8kj55" (OuterVolumeSpecName: "kube-api-access-8kj55") pod "47e3fe33-2dbe-42f6-a9f0-88092c943414" (UID: "47e3fe33-2dbe-42f6-a9f0-88092c943414"). InnerVolumeSpecName "kube-api-access-8kj55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.493202 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47e3fe33-2dbe-42f6-a9f0-88092c943414" (UID: "47e3fe33-2dbe-42f6-a9f0-88092c943414"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.550523 4858 scope.go:117] "RemoveContainer" containerID="4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.559778 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.559807 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47e3fe33-2dbe-42f6-a9f0-88092c943414-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.559817 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kj55\" (UniqueName: \"kubernetes.io/projected/47e3fe33-2dbe-42f6-a9f0-88092c943414-kube-api-access-8kj55\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.585690 4858 scope.go:117] "RemoveContainer" containerID="eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.637103 4858 scope.go:117] "RemoveContainer" containerID="33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29" Dec 05 16:05:38 crc kubenswrapper[4858]: E1205 16:05:38.639477 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29\": container with ID starting with 33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29 not found: ID does not exist" containerID="33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.639727 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29"} err="failed to get container status \"33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29\": rpc error: code = NotFound desc = could not find container \"33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29\": container with ID starting with 33f8fd31f354e8f8e6a6ebf48998a16650b306e615fa90701ed7359a68c58a29 not found: ID does not exist" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.639772 4858 scope.go:117] "RemoveContainer" containerID="4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af" Dec 05 16:05:38 crc kubenswrapper[4858]: E1205 16:05:38.640229 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af\": container with ID starting with 4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af not found: ID does not exist" containerID="4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af" Dec 
05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.640282 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af"} err="failed to get container status \"4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af\": rpc error: code = NotFound desc = could not find container \"4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af\": container with ID starting with 4282cf7ad7c3e497ff186e98b25cd8116a1ba2315feebeee216740a5859fc7af not found: ID does not exist" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.640314 4858 scope.go:117] "RemoveContainer" containerID="eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f" Dec 05 16:05:38 crc kubenswrapper[4858]: E1205 16:05:38.640665 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f\": container with ID starting with eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f not found: ID does not exist" containerID="eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.640700 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f"} err="failed to get container status \"eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f\": rpc error: code = NotFound desc = could not find container \"eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f\": container with ID starting with eb4be608338fc372363da7b468f0a8a9c1f499f05e24eaa14d2ea9d1ce49496f not found: ID does not exist" Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.818605 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xq7qc"] Dec 05 16:05:38 crc kubenswrapper[4858]: I1205 16:05:38.826750 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xq7qc"] Dec 05 16:05:39 crc kubenswrapper[4858]: I1205 16:05:39.913336 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" path="/var/lib/kubelet/pods/47e3fe33-2dbe-42f6-a9f0-88092c943414/volumes" Dec 05 16:05:43 crc kubenswrapper[4858]: I1205 16:05:43.923619 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:43 crc kubenswrapper[4858]: I1205 16:05:43.972490 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:44 crc kubenswrapper[4858]: I1205 16:05:44.161755 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:44 crc kubenswrapper[4858]: I1205 16:05:44.205922 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:45 crc kubenswrapper[4858]: I1205 16:05:45.158170 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlrbg"] Dec 05 16:05:45 crc kubenswrapper[4858]: I1205 16:05:45.529070 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zlrbg" 
podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="registry-server" containerID="cri-o://9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689" gracePeriod=2 Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.067139 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.204167 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98qdz\" (UniqueName: \"kubernetes.io/projected/e061ecc8-f69e-4593-83d8-bceb12e29cb9-kube-api-access-98qdz\") pod \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.204245 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-utilities\") pod \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.204297 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-catalog-content\") pod \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\" (UID: \"e061ecc8-f69e-4593-83d8-bceb12e29cb9\") " Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.209786 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-utilities" (OuterVolumeSpecName: "utilities") pod "e061ecc8-f69e-4593-83d8-bceb12e29cb9" (UID: "e061ecc8-f69e-4593-83d8-bceb12e29cb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.213117 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e061ecc8-f69e-4593-83d8-bceb12e29cb9-kube-api-access-98qdz" (OuterVolumeSpecName: "kube-api-access-98qdz") pod "e061ecc8-f69e-4593-83d8-bceb12e29cb9" (UID: "e061ecc8-f69e-4593-83d8-bceb12e29cb9"). InnerVolumeSpecName "kube-api-access-98qdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.270972 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e061ecc8-f69e-4593-83d8-bceb12e29cb9" (UID: "e061ecc8-f69e-4593-83d8-bceb12e29cb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.306309 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98qdz\" (UniqueName: \"kubernetes.io/projected/e061ecc8-f69e-4593-83d8-bceb12e29cb9-kube-api-access-98qdz\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.306342 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.306352 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e061ecc8-f69e-4593-83d8-bceb12e29cb9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.540123 4858 generic.go:334] "Generic (PLEG): container finished" podID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerID="9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689" exitCode=0 Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.540173 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrbg" event={"ID":"e061ecc8-f69e-4593-83d8-bceb12e29cb9","Type":"ContainerDied","Data":"9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689"} Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.540211 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zlrbg" event={"ID":"e061ecc8-f69e-4593-83d8-bceb12e29cb9","Type":"ContainerDied","Data":"813796117c050a80aa870f41c6b4fa375a8f04c34fa2c4ca035609fbd90ba7e1"} Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.540224 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zlrbg" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.540232 4858 scope.go:117] "RemoveContainer" containerID="9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.560535 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f42rz"] Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.560733 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f42rz" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="registry-server" containerID="cri-o://33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b" gracePeriod=2 Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.564052 4858 scope.go:117] "RemoveContainer" containerID="9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.589005 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zlrbg"] Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.597002 4858 scope.go:117] "RemoveContainer" containerID="4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.600334 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zlrbg"] Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.754811 4858 scope.go:117] "RemoveContainer" containerID="9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689" Dec 05 16:05:46 crc kubenswrapper[4858]: E1205 16:05:46.756107 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689\": container with ID starting with 9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689 not found: ID does not exist" containerID="9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.756183 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689"} err="failed to get container status \"9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689\": rpc error: code = NotFound desc = could not find container \"9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689\": container with ID starting with 9b815a7a6dfce6bd767ec5e9092454b5b97128542b1eaa08cc35910cae72e689 not found: ID does not exist" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.756211 4858 scope.go:117] "RemoveContainer" containerID="9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d" Dec 05 16:05:46 crc kubenswrapper[4858]: E1205 16:05:46.756565 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d\": container with ID starting with 9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d not found: ID does not exist" containerID="9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.756646 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d"} err="failed to get container status \"9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d\": rpc error: code = NotFound desc = could not find container \"9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d\": container with ID starting with 9a4255687d9a8b0da12535e50fc922d50dac5520a64c518e8551f9799640339d not found: ID does not exist" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.756679 4858 scope.go:117] "RemoveContainer" containerID="4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3" Dec 05 16:05:46 crc kubenswrapper[4858]: E1205 16:05:46.757041 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3\": container with ID starting with 4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3 not found: ID does not exist" containerID="4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3" Dec 05 16:05:46 crc kubenswrapper[4858]: I1205 16:05:46.757061 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3"} err="failed to get container status \"4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3\": rpc error: code = NotFound desc = could not find container \"4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3\": container with ID starting with 4188c6dde8fc84cb3cda4b481d89d54a8625bb2112e1d587d43b787c493aa6d3 not found: ID does not exist" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.100378 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.225893 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv2gw\" (UniqueName: \"kubernetes.io/projected/420fa739-a306-4bb1-9ace-6a607ee51b08-kube-api-access-xv2gw\") pod \"420fa739-a306-4bb1-9ace-6a607ee51b08\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.227352 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-catalog-content\") pod \"420fa739-a306-4bb1-9ace-6a607ee51b08\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.227472 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-utilities\") pod \"420fa739-a306-4bb1-9ace-6a607ee51b08\" (UID: \"420fa739-a306-4bb1-9ace-6a607ee51b08\") " Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.228083 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-utilities" (OuterVolumeSpecName: "utilities") pod "420fa739-a306-4bb1-9ace-6a607ee51b08" (UID: "420fa739-a306-4bb1-9ace-6a607ee51b08"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.231151 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420fa739-a306-4bb1-9ace-6a607ee51b08-kube-api-access-xv2gw" (OuterVolumeSpecName: "kube-api-access-xv2gw") pod "420fa739-a306-4bb1-9ace-6a607ee51b08" (UID: "420fa739-a306-4bb1-9ace-6a607ee51b08"). InnerVolumeSpecName "kube-api-access-xv2gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.277659 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "420fa739-a306-4bb1-9ace-6a607ee51b08" (UID: "420fa739-a306-4bb1-9ace-6a607ee51b08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.329578 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv2gw\" (UniqueName: \"kubernetes.io/projected/420fa739-a306-4bb1-9ace-6a607ee51b08-kube-api-access-xv2gw\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.329619 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.329631 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/420fa739-a306-4bb1-9ace-6a607ee51b08-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.536427 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8vtk" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="registry-server" probeResult="failure" output=< Dec 05 16:05:47 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:05:47 crc kubenswrapper[4858]: > Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.553260 4858 generic.go:334] "Generic (PLEG): container finished" podID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerID="33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b" exitCode=0 Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.553301 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f42rz" event={"ID":"420fa739-a306-4bb1-9ace-6a607ee51b08","Type":"ContainerDied","Data":"33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b"} Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.553320 4858 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.553320 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f42rz" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.553338 4858 scope.go:117] "RemoveContainer" containerID="33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.553326 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f42rz" event={"ID":"420fa739-a306-4bb1-9ace-6a607ee51b08","Type":"ContainerDied","Data":"939f070f22d535055bdeb6846ab98682eb6c632dd7132bf178aad663e05a05e6"} Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.585494 4858 scope.go:117] "RemoveContainer" containerID="e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.592429 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f42rz"] Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.600955 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f42rz"] Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.618895 4858 scope.go:117] "RemoveContainer" containerID="83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.646055 4858 scope.go:117] "RemoveContainer" containerID="33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b" Dec 05 16:05:47 crc kubenswrapper[4858]: E1205 16:05:47.648617 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b\": container with ID starting with 33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b not found: ID does not exist" containerID="33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.648667 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b"} err="failed to get container status \"33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b\": rpc error: code = NotFound desc = could not find container \"33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b\": container with ID starting with 33d0ba04259490200f3b184eb203d631438493c3d1eaee08233c345aac2df40b not found: ID does not exist" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.648701 4858 scope.go:117] "RemoveContainer" containerID="e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433" Dec 05 16:05:47 crc kubenswrapper[4858]: E1205 16:05:47.650796 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433\": container with ID starting with e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433 not found: ID does not exist" containerID="e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.651010 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433"} err="failed to get container status \"e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433\": rpc error: code = NotFound desc = could not find container \"e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433\": container with ID starting with e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433 not found: ID does not exist"
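The interleaved "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are not real failures: the kubelet issues a second RemoveContainer pass for IDs the first pass already deleted, CRI-O answers with gRPC NotFound ("ID does not exist"), and the kubelet records the error and moves on. A hedged sketch of how a caller can treat NotFound as "already gone"; the CRI client wiring is assumed, and only the status-code handling mirrors the log:

```go
// Sketch: classifying the NotFound errors seen above. A gRPC error whose
// status code is codes.NotFound means the container is already deleted,
// so a remove/status call can treat it as success. The error value here
// would come from a CRI runtime-service call; that wiring is assumed.
package criutil

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// IgnoreAlreadyRemoved returns nil for the benign "ID does not exist"
// case and passes every other runtime error through unchanged.
func IgnoreAlreadyRemoved(err error) error {
	if err == nil {
		return nil
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		return nil // matches "code = NotFound ... ID does not exist"
	}
	return err
}
```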
container \"e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433\": container with ID starting with e00ef57e6983268f737901ebe37ff6fc50c1470c2c4054ae1f19934564a02433 not found: ID does not exist" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.651093 4858 scope.go:117] "RemoveContainer" containerID="83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989" Dec 05 16:05:47 crc kubenswrapper[4858]: E1205 16:05:47.651409 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989\": container with ID starting with 83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989 not found: ID does not exist" containerID="83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.651492 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989"} err="failed to get container status \"83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989\": rpc error: code = NotFound desc = could not find container \"83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989\": container with ID starting with 83d1b31e67bb912da612c48e6fa4523a390f45753487d32417d7271af5052989 not found: ID does not exist" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.909284 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" path="/var/lib/kubelet/pods/420fa739-a306-4bb1-9ace-6a607ee51b08/volumes" Dec 05 16:05:47 crc kubenswrapper[4858]: I1205 16:05:47.910343 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" path="/var/lib/kubelet/pods/e061ecc8-f69e-4593-83d8-bceb12e29cb9/volumes" Dec 05 16:05:49 crc kubenswrapper[4858]: I1205 16:05:49.899376 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:05:50 crc kubenswrapper[4858]: I1205 16:05:50.586335 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"c7da50cd86f0b49bf1d17e1fe0fe026c3a6601551cdf0c7e2edebf7f25a9c7e9"} Dec 05 16:05:56 crc kubenswrapper[4858]: I1205 16:05:56.721171 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:56 crc kubenswrapper[4858]: I1205 16:05:56.786701 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:56 crc kubenswrapper[4858]: I1205 16:05:56.966661 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8vtk"] Dec 05 16:05:58 crc kubenswrapper[4858]: I1205 16:05:58.672687 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k8vtk" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="registry-server" containerID="cri-o://e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57" gracePeriod=2 Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.323369 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.520074 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-catalog-content\") pod \"ed8b9276-f428-4103-9cdb-2a867e287256\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.520163 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-utilities\") pod \"ed8b9276-f428-4103-9cdb-2a867e287256\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.520326 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7smk7\" (UniqueName: \"kubernetes.io/projected/ed8b9276-f428-4103-9cdb-2a867e287256-kube-api-access-7smk7\") pod \"ed8b9276-f428-4103-9cdb-2a867e287256\" (UID: \"ed8b9276-f428-4103-9cdb-2a867e287256\") " Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.522348 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-utilities" (OuterVolumeSpecName: "utilities") pod "ed8b9276-f428-4103-9cdb-2a867e287256" (UID: "ed8b9276-f428-4103-9cdb-2a867e287256"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.536193 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8b9276-f428-4103-9cdb-2a867e287256-kube-api-access-7smk7" (OuterVolumeSpecName: "kube-api-access-7smk7") pod "ed8b9276-f428-4103-9cdb-2a867e287256" (UID: "ed8b9276-f428-4103-9cdb-2a867e287256"). InnerVolumeSpecName "kube-api-access-7smk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.623670 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7smk7\" (UniqueName: \"kubernetes.io/projected/ed8b9276-f428-4103-9cdb-2a867e287256-kube-api-access-7smk7\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.624221 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.647750 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed8b9276-f428-4103-9cdb-2a867e287256" (UID: "ed8b9276-f428-4103-9cdb-2a867e287256"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.683272 4858 generic.go:334] "Generic (PLEG): container finished" podID="ed8b9276-f428-4103-9cdb-2a867e287256" containerID="e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57" exitCode=0 Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.683329 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8vtk" event={"ID":"ed8b9276-f428-4103-9cdb-2a867e287256","Type":"ContainerDied","Data":"e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57"} Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.683370 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8vtk" event={"ID":"ed8b9276-f428-4103-9cdb-2a867e287256","Type":"ContainerDied","Data":"be65462ca05e6c5e4d33ec4bbce26c10a4d045dffa72e1e1cd0934d5fe766c3a"} Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.683561 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8vtk" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.683703 4858 scope.go:117] "RemoveContainer" containerID="e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.714781 4858 scope.go:117] "RemoveContainer" containerID="446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.726094 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed8b9276-f428-4103-9cdb-2a867e287256-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.732054 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8vtk"] Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.745236 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k8vtk"] Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.748265 4858 scope.go:117] "RemoveContainer" containerID="cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.792605 4858 scope.go:117] "RemoveContainer" containerID="e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57" Dec 05 16:05:59 crc kubenswrapper[4858]: E1205 16:05:59.793434 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57\": container with ID starting with e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57 not found: ID does not exist" containerID="e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.793747 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57"} err="failed to get container status \"e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57\": rpc error: code = NotFound desc = could not find container \"e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57\": container with ID starting with e85d3e1a6caa2bdf289ef0448a2de79206f7ae1741946e51e59e987ac6115c57 not found: ID does not exist" Dec 05 16:05:59 crc 
kubenswrapper[4858]: I1205 16:05:59.793771 4858 scope.go:117] "RemoveContainer" containerID="446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74" Dec 05 16:05:59 crc kubenswrapper[4858]: E1205 16:05:59.796600 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74\": container with ID starting with 446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74 not found: ID does not exist" containerID="446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.796663 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74"} err="failed to get container status \"446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74\": rpc error: code = NotFound desc = could not find container \"446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74\": container with ID starting with 446d2729392de155614f35988571226912b195bf01c8d34ca61608c10c99df74 not found: ID does not exist" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.796689 4858 scope.go:117] "RemoveContainer" containerID="cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00" Dec 05 16:05:59 crc kubenswrapper[4858]: E1205 16:05:59.797082 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00\": container with ID starting with cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00 not found: ID does not exist" containerID="cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.797108 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00"} err="failed to get container status \"cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00\": rpc error: code = NotFound desc = could not find container \"cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00\": container with ID starting with cb4d7456654ff6b73bcc4c8248ab9611b67cad538fba142ca13b8de066ff6a00 not found: ID does not exist" Dec 05 16:05:59 crc kubenswrapper[4858]: I1205 16:05:59.909997 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" path="/var/lib/kubelet/pods/ed8b9276-f428-4103-9cdb-2a867e287256/volumes" Dec 05 16:06:03 crc kubenswrapper[4858]: E1205 16:06:03.653622 4858 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.174:39098->38.102.83.174:41641: read tcp 38.102.83.174:39098->38.102.83.174:41641: read: connection reset by peer Dec 05 16:08:14 crc kubenswrapper[4858]: I1205 16:08:14.760699 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:08:14 crc kubenswrapper[4858]: I1205 16:08:14.762524 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:08:44 crc kubenswrapper[4858]: I1205 16:08:44.759948 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:08:44 crc kubenswrapper[4858]: I1205 16:08:44.760453 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:09:14 crc kubenswrapper[4858]: I1205 16:09:14.759609 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:09:14 crc kubenswrapper[4858]: I1205 16:09:14.760149 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:09:14 crc kubenswrapper[4858]: I1205 16:09:14.760209 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 16:09:14 crc kubenswrapper[4858]: I1205 16:09:14.762260 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7da50cd86f0b49bf1d17e1fe0fe026c3a6601551cdf0c7e2edebf7f25a9c7e9"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 16:09:14 crc kubenswrapper[4858]: I1205 16:09:14.762858 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://c7da50cd86f0b49bf1d17e1fe0fe026c3a6601551cdf0c7e2edebf7f25a9c7e9" gracePeriod=600 Dec 05 16:09:15 crc kubenswrapper[4858]: I1205 16:09:15.428397 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="c7da50cd86f0b49bf1d17e1fe0fe026c3a6601551cdf0c7e2edebf7f25a9c7e9" exitCode=0 Dec 05 16:09:15 crc kubenswrapper[4858]: I1205 16:09:15.428475 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"c7da50cd86f0b49bf1d17e1fe0fe026c3a6601551cdf0c7e2edebf7f25a9c7e9"} Dec 05 16:09:15 crc kubenswrapper[4858]: I1205 16:09:15.429093 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3"} Dec 05 16:09:15 crc kubenswrapper[4858]: I1205 16:09:15.429181 4858 scope.go:117] "RemoveContainer" containerID="676153d61a2f948abaf74be1020b2e527d63f90007e91d59ab2c12045e61a3df" Dec 05 16:11:44 crc kubenswrapper[4858]: I1205 16:11:44.759569 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:11:44 crc kubenswrapper[4858]: I1205 16:11:44.760106 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:12:14 crc kubenswrapper[4858]: I1205 16:12:14.760293 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:12:14 crc kubenswrapper[4858]: I1205 16:12:14.760922 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:12:44 crc kubenswrapper[4858]: I1205 16:12:44.759595 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:12:44 crc kubenswrapper[4858]: I1205 16:12:44.760207 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:12:44 crc kubenswrapper[4858]: I1205 16:12:44.760273 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 16:12:44 crc kubenswrapper[4858]: I1205 16:12:44.761356 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 16:12:44 crc kubenswrapper[4858]: I1205 16:12:44.761726 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" 
containerID="cri-o://045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" gracePeriod=600 Dec 05 16:12:44 crc kubenswrapper[4858]: E1205 16:12:44.887725 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:12:45 crc kubenswrapper[4858]: I1205 16:12:45.268134 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" exitCode=0 Dec 05 16:12:45 crc kubenswrapper[4858]: I1205 16:12:45.268229 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3"} Dec 05 16:12:45 crc kubenswrapper[4858]: I1205 16:12:45.268816 4858 scope.go:117] "RemoveContainer" containerID="c7da50cd86f0b49bf1d17e1fe0fe026c3a6601551cdf0c7e2edebf7f25a9c7e9" Dec 05 16:12:45 crc kubenswrapper[4858]: I1205 16:12:45.269101 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:12:45 crc kubenswrapper[4858]: E1205 16:12:45.269640 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:12:56 crc kubenswrapper[4858]: I1205 16:12:56.899289 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:12:56 crc kubenswrapper[4858]: E1205 16:12:56.899955 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:13:10 crc kubenswrapper[4858]: I1205 16:13:10.899656 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:13:10 crc kubenswrapper[4858]: E1205 16:13:10.900430 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:13:23 crc kubenswrapper[4858]: I1205 16:13:23.899875 4858 scope.go:117] "RemoveContainer" 
containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:13:23 crc kubenswrapper[4858]: E1205 16:13:23.900590 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:13:37 crc kubenswrapper[4858]: I1205 16:13:37.899145 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:13:37 crc kubenswrapper[4858]: E1205 16:13:37.900034 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:13:50 crc kubenswrapper[4858]: I1205 16:13:50.899199 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:13:50 crc kubenswrapper[4858]: E1205 16:13:50.899957 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:14:05 crc kubenswrapper[4858]: I1205 16:14:05.899346 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:14:05 crc kubenswrapper[4858]: E1205 16:14:05.902248 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:14:17 crc kubenswrapper[4858]: I1205 16:14:17.900004 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:14:17 crc kubenswrapper[4858]: E1205 16:14:17.900744 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:14:29 crc kubenswrapper[4858]: I1205 16:14:29.899549 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:14:29 crc kubenswrapper[4858]: E1205 16:14:29.900256 4858 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:14:40 crc kubenswrapper[4858]: I1205 16:14:40.899314 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:14:40 crc kubenswrapper[4858]: E1205 16:14:40.900051 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:14:53 crc kubenswrapper[4858]: I1205 16:14:53.900117 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:14:53 crc kubenswrapper[4858]: E1205 16:14:53.900880 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.280338 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd"] Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.282793 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="extract-content" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.282899 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="extract-content" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.282914 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="extract-utilities" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.282924 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="extract-utilities" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.282941 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.282948 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.282966 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.282972 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: 
E1205 16:15:00.282990 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="extract-content" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.282997 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="extract-content" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.283014 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="extract-content" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283022 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="extract-content" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.283040 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="extract-utilities" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283045 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="extract-utilities" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.283062 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="extract-utilities" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283071 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="extract-utilities" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.283082 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="extract-content" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283089 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="extract-content" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.283101 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283109 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.283124 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283133 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: E1205 16:15:00.283143 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="extract-utilities" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283150 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="extract-utilities" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283555 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="e061ecc8-f69e-4593-83d8-bceb12e29cb9" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283719 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="420fa739-a306-4bb1-9ace-6a607ee51b08" containerName="registry-server" 
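The entries that follow show the kubelet admitting the collect-profiles CronJob pod: stale CPU- and memory-manager state from the departed catalog pods is dropped, a new sandbox is started, and three volumes are verified and mounted: a ConfigMap ("config-volume", backed by collect-profiles-config), a Secret ("secret-volume"), and the projected service-account token ("kube-api-access-5m9fn") that is injected automatically. A hedged sketch of the corresponding volume definitions in client-go types; the volume names and the ConfigMap name are from the log, while the Secret's object name is a placeholder, since the journal records only the volume name:

```go
// Sketch of the pod volumes implied by the mount events that follow.
// Names marked "from the log" are from the journal; the Secret name is a
// placeholder because the log identifies only the volume, not the object.
package manifests

import corev1 "k8s.io/api/core/v1"

var collectProfilesVolumes = []corev1.Volume{
	{
		Name: "config-volume", // from the log
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "collect-profiles-config", // ConfigMap cached by the reflector entry below
				},
			},
		},
	},
	{
		Name: "secret-volume", // from the log
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "collect-profiles-secret", // placeholder name
			},
		},
	},
	// "kube-api-access-5m9fn" is the projected service-account token
	// volume; it is injected automatically and not declared in the spec.
}
```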
Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283806 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8b9276-f428-4103-9cdb-2a867e287256" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.283899 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e3fe33-2dbe-42f6-a9f0-88092c943414" containerName="registry-server" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.286694 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.296495 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.299313 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.302905 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd"] Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.448911 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m9fn\" (UniqueName: \"kubernetes.io/projected/10e3d9de-f0f5-4eda-9e42-292bd9931cee-kube-api-access-5m9fn\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.449031 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10e3d9de-f0f5-4eda-9e42-292bd9931cee-secret-volume\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.449115 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10e3d9de-f0f5-4eda-9e42-292bd9931cee-config-volume\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.551152 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10e3d9de-f0f5-4eda-9e42-292bd9931cee-config-volume\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.551274 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m9fn\" (UniqueName: \"kubernetes.io/projected/10e3d9de-f0f5-4eda-9e42-292bd9931cee-kube-api-access-5m9fn\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.551331 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/10e3d9de-f0f5-4eda-9e42-292bd9931cee-secret-volume\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.553289 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10e3d9de-f0f5-4eda-9e42-292bd9931cee-config-volume\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.562737 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10e3d9de-f0f5-4eda-9e42-292bd9931cee-secret-volume\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.579634 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m9fn\" (UniqueName: \"kubernetes.io/projected/10e3d9de-f0f5-4eda-9e42-292bd9931cee-kube-api-access-5m9fn\") pod \"collect-profiles-29415855-7pgfd\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:00 crc kubenswrapper[4858]: I1205 16:15:00.633343 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:01 crc kubenswrapper[4858]: I1205 16:15:01.853521 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd"] Dec 05 16:15:02 crc kubenswrapper[4858]: I1205 16:15:02.489151 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" event={"ID":"10e3d9de-f0f5-4eda-9e42-292bd9931cee","Type":"ContainerDied","Data":"02aacefc0c9ec4c198055a95e981f7ec3b5150a69f95ad47f674948b2430aa4a"} Dec 05 16:15:02 crc kubenswrapper[4858]: I1205 16:15:02.489434 4858 generic.go:334] "Generic (PLEG): container finished" podID="10e3d9de-f0f5-4eda-9e42-292bd9931cee" containerID="02aacefc0c9ec4c198055a95e981f7ec3b5150a69f95ad47f674948b2430aa4a" exitCode=0 Dec 05 16:15:02 crc kubenswrapper[4858]: I1205 16:15:02.490029 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" event={"ID":"10e3d9de-f0f5-4eda-9e42-292bd9931cee","Type":"ContainerStarted","Data":"f9e187c3e675390778cb1efc47449365e74c8385ab9f6b9e0269dd61bae9feaa"} Dec 05 16:15:03 crc kubenswrapper[4858]: I1205 16:15:03.918176 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.106722 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10e3d9de-f0f5-4eda-9e42-292bd9931cee-config-volume\") pod \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.107066 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m9fn\" (UniqueName: \"kubernetes.io/projected/10e3d9de-f0f5-4eda-9e42-292bd9931cee-kube-api-access-5m9fn\") pod \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.107274 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10e3d9de-f0f5-4eda-9e42-292bd9931cee-secret-volume\") pod \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\" (UID: \"10e3d9de-f0f5-4eda-9e42-292bd9931cee\") " Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.108615 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e3d9de-f0f5-4eda-9e42-292bd9931cee-config-volume" (OuterVolumeSpecName: "config-volume") pod "10e3d9de-f0f5-4eda-9e42-292bd9931cee" (UID: "10e3d9de-f0f5-4eda-9e42-292bd9931cee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.116184 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10e3d9de-f0f5-4eda-9e42-292bd9931cee-kube-api-access-5m9fn" (OuterVolumeSpecName: "kube-api-access-5m9fn") pod "10e3d9de-f0f5-4eda-9e42-292bd9931cee" (UID: "10e3d9de-f0f5-4eda-9e42-292bd9931cee"). InnerVolumeSpecName "kube-api-access-5m9fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.116998 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10e3d9de-f0f5-4eda-9e42-292bd9931cee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "10e3d9de-f0f5-4eda-9e42-292bd9931cee" (UID: "10e3d9de-f0f5-4eda-9e42-292bd9931cee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.211175 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10e3d9de-f0f5-4eda-9e42-292bd9931cee-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.211210 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10e3d9de-f0f5-4eda-9e42-292bd9931cee-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.211232 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m9fn\" (UniqueName: \"kubernetes.io/projected/10e3d9de-f0f5-4eda-9e42-292bd9931cee-kube-api-access-5m9fn\") on node \"crc\" DevicePath \"\"" Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.508664 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" event={"ID":"10e3d9de-f0f5-4eda-9e42-292bd9931cee","Type":"ContainerDied","Data":"f9e187c3e675390778cb1efc47449365e74c8385ab9f6b9e0269dd61bae9feaa"} Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.509152 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415855-7pgfd" Dec 05 16:15:04 crc kubenswrapper[4858]: I1205 16:15:04.509651 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e187c3e675390778cb1efc47449365e74c8385ab9f6b9e0269dd61bae9feaa" Dec 05 16:15:05 crc kubenswrapper[4858]: I1205 16:15:05.007649 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"] Dec 05 16:15:05 crc kubenswrapper[4858]: I1205 16:15:05.015660 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415810-4p7cp"] Dec 05 16:15:05 crc kubenswrapper[4858]: I1205 16:15:05.899365 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:15:05 crc kubenswrapper[4858]: E1205 16:15:05.899938 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:15:05 crc kubenswrapper[4858]: I1205 16:15:05.910573 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd76b24-71a2-414a-8e3c-0a8bc7305386" path="/var/lib/kubelet/pods/ecd76b24-71a2-414a-8e3c-0a8bc7305386/volumes" Dec 05 16:15:18 crc kubenswrapper[4858]: I1205 16:15:18.899096 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:15:18 crc kubenswrapper[4858]: E1205 16:15:18.899796 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:15:32 crc kubenswrapper[4858]: I1205 16:15:32.899509 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:15:32 crc kubenswrapper[4858]: E1205 16:15:32.901346 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:15:46 crc kubenswrapper[4858]: I1205 16:15:46.899407 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:15:46 crc kubenswrapper[4858]: E1205 16:15:46.900192 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.242286 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cg5ch"] Dec 05 16:16:00 crc kubenswrapper[4858]: E1205 16:16:00.243347 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e3d9de-f0f5-4eda-9e42-292bd9931cee" containerName="collect-profiles" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.243361 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e3d9de-f0f5-4eda-9e42-292bd9931cee" containerName="collect-profiles" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.243594 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e3d9de-f0f5-4eda-9e42-292bd9931cee" containerName="collect-profiles" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.247378 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.260395 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cg5ch"] Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.445355 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-catalog-content\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.445498 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb7kn\" (UniqueName: \"kubernetes.io/projected/d7bed814-9f9c-4cd2-81d5-ffa920201c73-kube-api-access-lb7kn\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.445814 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-utilities\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.547950 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-catalog-content\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.548250 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb7kn\" (UniqueName: \"kubernetes.io/projected/d7bed814-9f9c-4cd2-81d5-ffa920201c73-kube-api-access-lb7kn\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.548361 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-utilities\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.549538 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-catalog-content\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.549778 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-utilities\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.574467 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lb7kn\" (UniqueName: \"kubernetes.io/projected/d7bed814-9f9c-4cd2-81d5-ffa920201c73-kube-api-access-lb7kn\") pod \"community-operators-cg5ch\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:00 crc kubenswrapper[4858]: I1205 16:16:00.603284 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:01 crc kubenswrapper[4858]: I1205 16:16:01.200735 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cg5ch"] Dec 05 16:16:01 crc kubenswrapper[4858]: I1205 16:16:01.907293 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:16:01 crc kubenswrapper[4858]: E1205 16:16:01.907868 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:16:02 crc kubenswrapper[4858]: I1205 16:16:02.065109 4858 generic.go:334] "Generic (PLEG): container finished" podID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerID="786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32" exitCode=0 Dec 05 16:16:02 crc kubenswrapper[4858]: I1205 16:16:02.065159 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg5ch" event={"ID":"d7bed814-9f9c-4cd2-81d5-ffa920201c73","Type":"ContainerDied","Data":"786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32"} Dec 05 16:16:02 crc kubenswrapper[4858]: I1205 16:16:02.065186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg5ch" event={"ID":"d7bed814-9f9c-4cd2-81d5-ffa920201c73","Type":"ContainerStarted","Data":"f20bb7c585c7f8aee45474d599a6976ebb80e6235e6b4104bc58f32a7815b805"} Dec 05 16:16:02 crc kubenswrapper[4858]: I1205 16:16:02.067644 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 16:16:03 crc kubenswrapper[4858]: I1205 16:16:03.268899 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg5ch" event={"ID":"d7bed814-9f9c-4cd2-81d5-ffa920201c73","Type":"ContainerStarted","Data":"7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3"} Dec 05 16:16:03 crc kubenswrapper[4858]: I1205 16:16:03.817114 4858 scope.go:117] "RemoveContainer" containerID="447d537ca6f23c8007a0dcde6ed2034e393f282c561d6cc515b78ca292e53063" Dec 05 16:16:04 crc kubenswrapper[4858]: I1205 16:16:04.282547 4858 generic.go:334] "Generic (PLEG): container finished" podID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerID="7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3" exitCode=0 Dec 05 16:16:04 crc kubenswrapper[4858]: I1205 16:16:04.282787 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg5ch" event={"ID":"d7bed814-9f9c-4cd2-81d5-ffa920201c73","Type":"ContainerDied","Data":"7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3"} Dec 05 16:16:05 crc kubenswrapper[4858]: I1205 16:16:05.294743 
4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg5ch" event={"ID":"d7bed814-9f9c-4cd2-81d5-ffa920201c73","Type":"ContainerStarted","Data":"ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81"} Dec 05 16:16:05 crc kubenswrapper[4858]: I1205 16:16:05.321084 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cg5ch" podStartSLOduration=2.5217646 podStartE2EDuration="5.320841644s" podCreationTimestamp="2025-12-05 16:16:00 +0000 UTC" firstStartedPulling="2025-12-05 16:16:02.066585963 +0000 UTC m=+8370.614184092" lastFinishedPulling="2025-12-05 16:16:04.865662997 +0000 UTC m=+8373.413261136" observedRunningTime="2025-12-05 16:16:05.320529816 +0000 UTC m=+8373.868127945" watchObservedRunningTime="2025-12-05 16:16:05.320841644 +0000 UTC m=+8373.868439783" Dec 05 16:16:10 crc kubenswrapper[4858]: I1205 16:16:10.604756 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:10 crc kubenswrapper[4858]: I1205 16:16:10.605488 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:10 crc kubenswrapper[4858]: I1205 16:16:10.663057 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:11 crc kubenswrapper[4858]: I1205 16:16:11.401436 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:11 crc kubenswrapper[4858]: I1205 16:16:11.471915 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cg5ch"] Dec 05 16:16:13 crc kubenswrapper[4858]: I1205 16:16:13.364852 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cg5ch" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerName="registry-server" containerID="cri-o://ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81" gracePeriod=2 Dec 05 16:16:13 crc kubenswrapper[4858]: I1205 16:16:13.908218 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:13 crc kubenswrapper[4858]: I1205 16:16:13.967585 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-catalog-content\") pod \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " Dec 05 16:16:13 crc kubenswrapper[4858]: I1205 16:16:13.967909 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb7kn\" (UniqueName: \"kubernetes.io/projected/d7bed814-9f9c-4cd2-81d5-ffa920201c73-kube-api-access-lb7kn\") pod \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " Dec 05 16:16:13 crc kubenswrapper[4858]: I1205 16:16:13.968052 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-utilities\") pod \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\" (UID: \"d7bed814-9f9c-4cd2-81d5-ffa920201c73\") " Dec 05 16:16:13 crc kubenswrapper[4858]: I1205 16:16:13.968896 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-utilities" (OuterVolumeSpecName: "utilities") pod "d7bed814-9f9c-4cd2-81d5-ffa920201c73" (UID: "d7bed814-9f9c-4cd2-81d5-ffa920201c73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:16:13 crc kubenswrapper[4858]: I1205 16:16:13.977093 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7bed814-9f9c-4cd2-81d5-ffa920201c73-kube-api-access-lb7kn" (OuterVolumeSpecName: "kube-api-access-lb7kn") pod "d7bed814-9f9c-4cd2-81d5-ffa920201c73" (UID: "d7bed814-9f9c-4cd2-81d5-ffa920201c73"). InnerVolumeSpecName "kube-api-access-lb7kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.022260 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7bed814-9f9c-4cd2-81d5-ffa920201c73" (UID: "d7bed814-9f9c-4cd2-81d5-ffa920201c73"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.070972 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.071006 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb7kn\" (UniqueName: \"kubernetes.io/projected/d7bed814-9f9c-4cd2-81d5-ffa920201c73-kube-api-access-lb7kn\") on node \"crc\" DevicePath \"\"" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.071016 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7bed814-9f9c-4cd2-81d5-ffa920201c73-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.376512 4858 generic.go:334] "Generic (PLEG): container finished" podID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerID="ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81" exitCode=0 Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.376570 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cg5ch" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.376589 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg5ch" event={"ID":"d7bed814-9f9c-4cd2-81d5-ffa920201c73","Type":"ContainerDied","Data":"ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81"} Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.376913 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg5ch" event={"ID":"d7bed814-9f9c-4cd2-81d5-ffa920201c73","Type":"ContainerDied","Data":"f20bb7c585c7f8aee45474d599a6976ebb80e6235e6b4104bc58f32a7815b805"} Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.377310 4858 scope.go:117] "RemoveContainer" containerID="ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.405789 4858 scope.go:117] "RemoveContainer" containerID="7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.418139 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cg5ch"] Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.431506 4858 scope.go:117] "RemoveContainer" containerID="786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.434232 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cg5ch"] Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.484565 4858 scope.go:117] "RemoveContainer" containerID="ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81" Dec 05 16:16:14 crc kubenswrapper[4858]: E1205 16:16:14.486135 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81\": container with ID starting with ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81 not found: ID does not exist" containerID="ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.486224 
4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81"} err="failed to get container status \"ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81\": rpc error: code = NotFound desc = could not find container \"ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81\": container with ID starting with ed09387d87bd83d47b1c140b7a44960d9c6330807964a486091ff53f1dc06b81 not found: ID does not exist" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.486261 4858 scope.go:117] "RemoveContainer" containerID="7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3" Dec 05 16:16:14 crc kubenswrapper[4858]: E1205 16:16:14.486669 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3\": container with ID starting with 7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3 not found: ID does not exist" containerID="7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.486702 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3"} err="failed to get container status \"7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3\": rpc error: code = NotFound desc = could not find container \"7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3\": container with ID starting with 7f2cebdada884aa4a7d56a600a6f6d1f187d6aa7b0d224e97b3140842ed7e2a3 not found: ID does not exist" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.486753 4858 scope.go:117] "RemoveContainer" containerID="786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32" Dec 05 16:16:14 crc kubenswrapper[4858]: E1205 16:16:14.487304 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32\": container with ID starting with 786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32 not found: ID does not exist" containerID="786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32" Dec 05 16:16:14 crc kubenswrapper[4858]: I1205 16:16:14.487330 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32"} err="failed to get container status \"786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32\": rpc error: code = NotFound desc = could not find container \"786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32\": container with ID starting with 786bf9a4a6d658c400783b2f1d30cdb6114dd6d8a3449d478dc1c638bd15da32 not found: ID does not exist" Dec 05 16:16:15 crc kubenswrapper[4858]: I1205 16:16:15.931301 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" path="/var/lib/kubelet/pods/d7bed814-9f9c-4cd2-81d5-ffa920201c73/volumes" Dec 05 16:16:16 crc kubenswrapper[4858]: I1205 16:16:16.899368 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:16:16 crc kubenswrapper[4858]: E1205 16:16:16.899622 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:16:28 crc kubenswrapper[4858]: I1205 16:16:28.900062 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:16:28 crc kubenswrapper[4858]: E1205 16:16:28.900874 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:16:40 crc kubenswrapper[4858]: I1205 16:16:40.899061 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:16:40 crc kubenswrapper[4858]: E1205 16:16:40.899803 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:16:51 crc kubenswrapper[4858]: I1205 16:16:51.906317 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:16:51 crc kubenswrapper[4858]: E1205 16:16:51.907336 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.235762 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dtmlt"] Dec 05 16:17:02 crc kubenswrapper[4858]: E1205 16:17:02.236666 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerName="extract-utilities" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.236679 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerName="extract-utilities" Dec 05 16:17:02 crc kubenswrapper[4858]: E1205 16:17:02.236697 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerName="extract-content" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.236704 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerName="extract-content" Dec 05 16:17:02 crc kubenswrapper[4858]: E1205 16:17:02.236752 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" 
containerName="registry-server" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.236760 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerName="registry-server" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.237001 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7bed814-9f9c-4cd2-81d5-ffa920201c73" containerName="registry-server" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.238417 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.248183 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc68m\" (UniqueName: \"kubernetes.io/projected/06dd2017-c30b-4481-98ef-f85d6df55cbb-kube-api-access-wc68m\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.248332 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-utilities\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.248518 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-catalog-content\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.251393 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtmlt"] Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.350008 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-catalog-content\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.350417 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc68m\" (UniqueName: \"kubernetes.io/projected/06dd2017-c30b-4481-98ef-f85d6df55cbb-kube-api-access-wc68m\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.350549 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-utilities\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.350686 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-catalog-content\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " 
pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.350945 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-utilities\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.369530 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc68m\" (UniqueName: \"kubernetes.io/projected/06dd2017-c30b-4481-98ef-f85d6df55cbb-kube-api-access-wc68m\") pod \"redhat-operators-dtmlt\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:02 crc kubenswrapper[4858]: I1205 16:17:02.562267 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:03 crc kubenswrapper[4858]: I1205 16:17:03.073649 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtmlt"] Dec 05 16:17:03 crc kubenswrapper[4858]: I1205 16:17:03.857384 4858 generic.go:334] "Generic (PLEG): container finished" podID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerID="999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0" exitCode=0 Dec 05 16:17:03 crc kubenswrapper[4858]: I1205 16:17:03.857457 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtmlt" event={"ID":"06dd2017-c30b-4481-98ef-f85d6df55cbb","Type":"ContainerDied","Data":"999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0"} Dec 05 16:17:03 crc kubenswrapper[4858]: I1205 16:17:03.857749 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtmlt" event={"ID":"06dd2017-c30b-4481-98ef-f85d6df55cbb","Type":"ContainerStarted","Data":"d5fe7d01d6a06b776d7814ebd2e0880a5dc7b3445db9be4b8bcc43b1747400f0"} Dec 05 16:17:03 crc kubenswrapper[4858]: I1205 16:17:03.899991 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:17:03 crc kubenswrapper[4858]: E1205 16:17:03.900232 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:17:05 crc kubenswrapper[4858]: I1205 16:17:05.886533 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtmlt" event={"ID":"06dd2017-c30b-4481-98ef-f85d6df55cbb","Type":"ContainerStarted","Data":"4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2"} Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.640250 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sztzb"] Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.643390 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.665591 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sztzb"] Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.714706 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-utilities\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.714993 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbpbx\" (UniqueName: \"kubernetes.io/projected/72816059-ce3b-41f7-858b-6551fb97d7b8-kube-api-access-qbpbx\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.715482 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-catalog-content\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.817617 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-utilities\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.817712 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbpbx\" (UniqueName: \"kubernetes.io/projected/72816059-ce3b-41f7-858b-6551fb97d7b8-kube-api-access-qbpbx\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.817819 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-catalog-content\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.818178 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-utilities\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.818315 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-catalog-content\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.847704 4858 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qbpbx\" (UniqueName: \"kubernetes.io/projected/72816059-ce3b-41f7-858b-6551fb97d7b8-kube-api-access-qbpbx\") pod \"certified-operators-sztzb\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.905079 4858 generic.go:334] "Generic (PLEG): container finished" podID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerID="4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2" exitCode=0 Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.911919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtmlt" event={"ID":"06dd2017-c30b-4481-98ef-f85d6df55cbb","Type":"ContainerDied","Data":"4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2"} Dec 05 16:17:07 crc kubenswrapper[4858]: I1205 16:17:07.965223 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:08 crc kubenswrapper[4858]: I1205 16:17:08.628634 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sztzb"] Dec 05 16:17:08 crc kubenswrapper[4858]: I1205 16:17:08.919058 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sztzb" event={"ID":"72816059-ce3b-41f7-858b-6551fb97d7b8","Type":"ContainerStarted","Data":"9adc9a15e597825ab0259e83a6b398b9219fb1c4eb8a3ad12cafc4a6c7a371d7"} Dec 05 16:17:09 crc kubenswrapper[4858]: I1205 16:17:09.929105 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtmlt" event={"ID":"06dd2017-c30b-4481-98ef-f85d6df55cbb","Type":"ContainerStarted","Data":"9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10"} Dec 05 16:17:09 crc kubenswrapper[4858]: I1205 16:17:09.931350 4858 generic.go:334] "Generic (PLEG): container finished" podID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerID="fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999" exitCode=0 Dec 05 16:17:09 crc kubenswrapper[4858]: I1205 16:17:09.931434 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sztzb" event={"ID":"72816059-ce3b-41f7-858b-6551fb97d7b8","Type":"ContainerDied","Data":"fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999"} Dec 05 16:17:09 crc kubenswrapper[4858]: I1205 16:17:09.955349 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dtmlt" podStartSLOduration=2.452754339 podStartE2EDuration="7.955327724s" podCreationTimestamp="2025-12-05 16:17:02 +0000 UTC" firstStartedPulling="2025-12-05 16:17:03.859677225 +0000 UTC m=+8432.407275364" lastFinishedPulling="2025-12-05 16:17:09.36225061 +0000 UTC m=+8437.909848749" observedRunningTime="2025-12-05 16:17:09.948987732 +0000 UTC m=+8438.496585881" watchObservedRunningTime="2025-12-05 16:17:09.955327724 +0000 UTC m=+8438.502925863" Dec 05 16:17:10 crc kubenswrapper[4858]: I1205 16:17:10.958376 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sztzb" event={"ID":"72816059-ce3b-41f7-858b-6551fb97d7b8","Type":"ContainerStarted","Data":"f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e"} Dec 05 16:17:12 crc kubenswrapper[4858]: I1205 16:17:12.562665 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:12 crc kubenswrapper[4858]: I1205 16:17:12.564081 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:12 crc kubenswrapper[4858]: I1205 16:17:12.976601 4858 generic.go:334] "Generic (PLEG): container finished" podID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerID="f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e" exitCode=0 Dec 05 16:17:12 crc kubenswrapper[4858]: I1205 16:17:12.976661 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sztzb" event={"ID":"72816059-ce3b-41f7-858b-6551fb97d7b8","Type":"ContainerDied","Data":"f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e"} Dec 05 16:17:13 crc kubenswrapper[4858]: I1205 16:17:13.618988 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtmlt" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="registry-server" probeResult="failure" output=< Dec 05 16:17:13 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:17:13 crc kubenswrapper[4858]: > Dec 05 16:17:13 crc kubenswrapper[4858]: I1205 16:17:13.992544 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sztzb" event={"ID":"72816059-ce3b-41f7-858b-6551fb97d7b8","Type":"ContainerStarted","Data":"6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc"} Dec 05 16:17:14 crc kubenswrapper[4858]: I1205 16:17:14.025089 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sztzb" podStartSLOduration=3.5970735019999998 podStartE2EDuration="7.025072577s" podCreationTimestamp="2025-12-05 16:17:07 +0000 UTC" firstStartedPulling="2025-12-05 16:17:09.932625841 +0000 UTC m=+8438.480223980" lastFinishedPulling="2025-12-05 16:17:13.360624916 +0000 UTC m=+8441.908223055" observedRunningTime="2025-12-05 16:17:14.013914167 +0000 UTC m=+8442.561512326" watchObservedRunningTime="2025-12-05 16:17:14.025072577 +0000 UTC m=+8442.572670716" Dec 05 16:17:16 crc kubenswrapper[4858]: I1205 16:17:16.898979 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:17:16 crc kubenswrapper[4858]: E1205 16:17:16.899461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:17:17 crc kubenswrapper[4858]: I1205 16:17:17.965386 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:17 crc kubenswrapper[4858]: I1205 16:17:17.965673 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:19 crc kubenswrapper[4858]: I1205 16:17:19.014987 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sztzb" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="registry-server" probeResult="failure" output=< Dec 05 16:17:19 crc 
kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:17:19 crc kubenswrapper[4858]: > Dec 05 16:17:23 crc kubenswrapper[4858]: I1205 16:17:23.614422 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtmlt" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="registry-server" probeResult="failure" output=< Dec 05 16:17:23 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:17:23 crc kubenswrapper[4858]: > Dec 05 16:17:28 crc kubenswrapper[4858]: I1205 16:17:28.023597 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:28 crc kubenswrapper[4858]: I1205 16:17:28.089491 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:28 crc kubenswrapper[4858]: I1205 16:17:28.263021 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sztzb"] Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.128437 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sztzb" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="registry-server" containerID="cri-o://6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc" gracePeriod=2 Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.775348 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.901375 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:17:29 crc kubenswrapper[4858]: E1205 16:17:29.902028 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.973080 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-catalog-content\") pod \"72816059-ce3b-41f7-858b-6551fb97d7b8\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.973138 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-utilities\") pod \"72816059-ce3b-41f7-858b-6551fb97d7b8\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.973524 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbpbx\" (UniqueName: \"kubernetes.io/projected/72816059-ce3b-41f7-858b-6551fb97d7b8-kube-api-access-qbpbx\") pod \"72816059-ce3b-41f7-858b-6551fb97d7b8\" (UID: \"72816059-ce3b-41f7-858b-6551fb97d7b8\") " Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.974091 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-utilities" (OuterVolumeSpecName: "utilities") pod "72816059-ce3b-41f7-858b-6551fb97d7b8" (UID: "72816059-ce3b-41f7-858b-6551fb97d7b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.975566 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:29 crc kubenswrapper[4858]: I1205 16:17:29.989199 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72816059-ce3b-41f7-858b-6551fb97d7b8-kube-api-access-qbpbx" (OuterVolumeSpecName: "kube-api-access-qbpbx") pod "72816059-ce3b-41f7-858b-6551fb97d7b8" (UID: "72816059-ce3b-41f7-858b-6551fb97d7b8"). InnerVolumeSpecName "kube-api-access-qbpbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.027573 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72816059-ce3b-41f7-858b-6551fb97d7b8" (UID: "72816059-ce3b-41f7-858b-6551fb97d7b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.077354 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72816059-ce3b-41f7-858b-6551fb97d7b8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.077415 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbpbx\" (UniqueName: \"kubernetes.io/projected/72816059-ce3b-41f7-858b-6551fb97d7b8-kube-api-access-qbpbx\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.137223 4858 generic.go:334] "Generic (PLEG): container finished" podID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerID="6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc" exitCode=0 Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.137267 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sztzb" event={"ID":"72816059-ce3b-41f7-858b-6551fb97d7b8","Type":"ContainerDied","Data":"6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc"} Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.137302 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sztzb" event={"ID":"72816059-ce3b-41f7-858b-6551fb97d7b8","Type":"ContainerDied","Data":"9adc9a15e597825ab0259e83a6b398b9219fb1c4eb8a3ad12cafc4a6c7a371d7"} Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.137302 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sztzb" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.137320 4858 scope.go:117] "RemoveContainer" containerID="6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.166016 4858 scope.go:117] "RemoveContainer" containerID="f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.171735 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sztzb"] Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.184306 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sztzb"] Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.186184 4858 scope.go:117] "RemoveContainer" containerID="fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.241381 4858 scope.go:117] "RemoveContainer" containerID="6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc" Dec 05 16:17:30 crc kubenswrapper[4858]: E1205 16:17:30.248802 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc\": container with ID starting with 6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc not found: ID does not exist" containerID="6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.248894 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc"} err="failed to get container status \"6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc\": rpc error: code = NotFound desc = could not find container \"6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc\": container with ID starting with 6b3dcdfeb190a92b584df5917b9aa3cea9e5092b37d78694cd595de5e415f4fc not found: ID does not exist" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.248926 4858 scope.go:117] "RemoveContainer" containerID="f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e" Dec 05 16:17:30 crc kubenswrapper[4858]: E1205 16:17:30.249499 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e\": container with ID starting with f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e not found: ID does not exist" containerID="f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.249677 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e"} err="failed to get container status \"f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e\": rpc error: code = NotFound desc = could not find container \"f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e\": container with ID starting with f3be33018702cde675369290e7cd49b96ece335595507ae074d9cfdfe45ae11e not found: ID does not exist" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.249866 4858 scope.go:117] "RemoveContainer" 
containerID="fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999" Dec 05 16:17:30 crc kubenswrapper[4858]: E1205 16:17:30.250338 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999\": container with ID starting with fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999 not found: ID does not exist" containerID="fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999" Dec 05 16:17:30 crc kubenswrapper[4858]: I1205 16:17:30.250375 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999"} err="failed to get container status \"fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999\": rpc error: code = NotFound desc = could not find container \"fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999\": container with ID starting with fa7f4b20307549fd2e5120f4d47a201d3b73951cd86968b0a22c5f6ab6e8a999 not found: ID does not exist" Dec 05 16:17:31 crc kubenswrapper[4858]: I1205 16:17:31.916419 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" path="/var/lib/kubelet/pods/72816059-ce3b-41f7-858b-6551fb97d7b8/volumes" Dec 05 16:17:32 crc kubenswrapper[4858]: I1205 16:17:32.611443 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:32 crc kubenswrapper[4858]: I1205 16:17:32.661997 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:33 crc kubenswrapper[4858]: I1205 16:17:33.658294 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dtmlt"] Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.175739 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dtmlt" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="registry-server" containerID="cri-o://9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10" gracePeriod=2 Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.680494 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.862112 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-catalog-content\") pod \"06dd2017-c30b-4481-98ef-f85d6df55cbb\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.862229 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc68m\" (UniqueName: \"kubernetes.io/projected/06dd2017-c30b-4481-98ef-f85d6df55cbb-kube-api-access-wc68m\") pod \"06dd2017-c30b-4481-98ef-f85d6df55cbb\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.862285 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-utilities\") pod \"06dd2017-c30b-4481-98ef-f85d6df55cbb\" (UID: \"06dd2017-c30b-4481-98ef-f85d6df55cbb\") " Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.863546 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-utilities" (OuterVolumeSpecName: "utilities") pod "06dd2017-c30b-4481-98ef-f85d6df55cbb" (UID: "06dd2017-c30b-4481-98ef-f85d6df55cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.867040 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.881064 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06dd2017-c30b-4481-98ef-f85d6df55cbb-kube-api-access-wc68m" (OuterVolumeSpecName: "kube-api-access-wc68m") pod "06dd2017-c30b-4481-98ef-f85d6df55cbb" (UID: "06dd2017-c30b-4481-98ef-f85d6df55cbb"). InnerVolumeSpecName "kube-api-access-wc68m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.968149 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc68m\" (UniqueName: \"kubernetes.io/projected/06dd2017-c30b-4481-98ef-f85d6df55cbb-kube-api-access-wc68m\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:34 crc kubenswrapper[4858]: I1205 16:17:34.970490 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06dd2017-c30b-4481-98ef-f85d6df55cbb" (UID: "06dd2017-c30b-4481-98ef-f85d6df55cbb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.068879 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06dd2017-c30b-4481-98ef-f85d6df55cbb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.187275 4858 generic.go:334] "Generic (PLEG): container finished" podID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerID="9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10" exitCode=0 Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.187322 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtmlt" event={"ID":"06dd2017-c30b-4481-98ef-f85d6df55cbb","Type":"ContainerDied","Data":"9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10"} Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.187336 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtmlt" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.187359 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtmlt" event={"ID":"06dd2017-c30b-4481-98ef-f85d6df55cbb","Type":"ContainerDied","Data":"d5fe7d01d6a06b776d7814ebd2e0880a5dc7b3445db9be4b8bcc43b1747400f0"} Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.187377 4858 scope.go:117] "RemoveContainer" containerID="9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.220255 4858 scope.go:117] "RemoveContainer" containerID="4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.230318 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dtmlt"] Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.238781 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dtmlt"] Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.253430 4858 scope.go:117] "RemoveContainer" containerID="999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.304209 4858 scope.go:117] "RemoveContainer" containerID="9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10" Dec 05 16:17:35 crc kubenswrapper[4858]: E1205 16:17:35.305025 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10\": container with ID starting with 9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10 not found: ID does not exist" containerID="9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.305088 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10"} err="failed to get container status \"9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10\": rpc error: code = NotFound desc = could not find container \"9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10\": container with ID starting with 9f8bb21720c1b6e3c851bf41066c40b583f910d504a68e770a50f4bc5a0baf10 not found: ID does not exist" Dec 05 16:17:35 crc 
kubenswrapper[4858]: I1205 16:17:35.305121 4858 scope.go:117] "RemoveContainer" containerID="4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2" Dec 05 16:17:35 crc kubenswrapper[4858]: E1205 16:17:35.305569 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2\": container with ID starting with 4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2 not found: ID does not exist" containerID="4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.305771 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2"} err="failed to get container status \"4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2\": rpc error: code = NotFound desc = could not find container \"4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2\": container with ID starting with 4e4a3dd4fecf1516b569eb8e4161c3528c8590f35c021bbd6b466fabbcc7b6b2 not found: ID does not exist" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.306067 4858 scope.go:117] "RemoveContainer" containerID="999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0" Dec 05 16:17:35 crc kubenswrapper[4858]: E1205 16:17:35.306745 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0\": container with ID starting with 999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0 not found: ID does not exist" containerID="999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.306777 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0"} err="failed to get container status \"999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0\": rpc error: code = NotFound desc = could not find container \"999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0\": container with ID starting with 999b6e0c73703c357caa98749d6aa5835b8e8e3ec5e7cd8e7d7f277ba0c8a3b0 not found: ID does not exist" Dec 05 16:17:35 crc kubenswrapper[4858]: I1205 16:17:35.911385 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" path="/var/lib/kubelet/pods/06dd2017-c30b-4481-98ef-f85d6df55cbb/volumes" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.484426 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tdpn2"] Dec 05 16:17:42 crc kubenswrapper[4858]: E1205 16:17:42.485428 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="extract-content" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.489303 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="extract-content" Dec 05 16:17:42 crc kubenswrapper[4858]: E1205 16:17:42.489391 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="registry-server" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.489401 4858 
state_mem.go:107] "Deleted CPUSet assignment" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="registry-server" Dec 05 16:17:42 crc kubenswrapper[4858]: E1205 16:17:42.489420 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="registry-server" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.489428 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="registry-server" Dec 05 16:17:42 crc kubenswrapper[4858]: E1205 16:17:42.489447 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="extract-content" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.489453 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="extract-content" Dec 05 16:17:42 crc kubenswrapper[4858]: E1205 16:17:42.489464 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="extract-utilities" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.489472 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="extract-utilities" Dec 05 16:17:42 crc kubenswrapper[4858]: E1205 16:17:42.489488 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="extract-utilities" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.489497 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="extract-utilities" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.489863 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="06dd2017-c30b-4481-98ef-f85d6df55cbb" containerName="registry-server" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.489889 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="72816059-ce3b-41f7-858b-6551fb97d7b8" containerName="registry-server" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.492095 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.496388 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdpn2"] Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.600697 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbf4d\" (UniqueName: \"kubernetes.io/projected/2222d176-ec89-4738-9576-71af0b4b567d-kube-api-access-vbf4d\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.601374 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-catalog-content\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.601510 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-utilities\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.702895 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-utilities\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.702978 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbf4d\" (UniqueName: \"kubernetes.io/projected/2222d176-ec89-4738-9576-71af0b4b567d-kube-api-access-vbf4d\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.703047 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-catalog-content\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.703844 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-utilities\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.704447 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-catalog-content\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.722323 4858 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vbf4d\" (UniqueName: \"kubernetes.io/projected/2222d176-ec89-4738-9576-71af0b4b567d-kube-api-access-vbf4d\") pod \"redhat-marketplace-tdpn2\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:42 crc kubenswrapper[4858]: I1205 16:17:42.811579 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:43 crc kubenswrapper[4858]: I1205 16:17:43.409085 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdpn2"] Dec 05 16:17:44 crc kubenswrapper[4858]: I1205 16:17:44.261413 4858 generic.go:334] "Generic (PLEG): container finished" podID="2222d176-ec89-4738-9576-71af0b4b567d" containerID="3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242" exitCode=0 Dec 05 16:17:44 crc kubenswrapper[4858]: I1205 16:17:44.261567 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdpn2" event={"ID":"2222d176-ec89-4738-9576-71af0b4b567d","Type":"ContainerDied","Data":"3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242"} Dec 05 16:17:44 crc kubenswrapper[4858]: I1205 16:17:44.261687 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdpn2" event={"ID":"2222d176-ec89-4738-9576-71af0b4b567d","Type":"ContainerStarted","Data":"9e8f98f227f08651aa6edc65543aaf72547206afe71278f9d999ad5d1910d232"} Dec 05 16:17:44 crc kubenswrapper[4858]: I1205 16:17:44.899124 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:17:45 crc kubenswrapper[4858]: I1205 16:17:45.270799 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdpn2" event={"ID":"2222d176-ec89-4738-9576-71af0b4b567d","Type":"ContainerStarted","Data":"f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49"} Dec 05 16:17:45 crc kubenswrapper[4858]: I1205 16:17:45.280944 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"6f9b9f4dd5c8770a3c011867259a569fb2e678a72cc8ad4dbdf2917858af9eb2"} Dec 05 16:17:46 crc kubenswrapper[4858]: I1205 16:17:46.292881 4858 generic.go:334] "Generic (PLEG): container finished" podID="2222d176-ec89-4738-9576-71af0b4b567d" containerID="f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49" exitCode=0 Dec 05 16:17:46 crc kubenswrapper[4858]: I1205 16:17:46.292967 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdpn2" event={"ID":"2222d176-ec89-4738-9576-71af0b4b567d","Type":"ContainerDied","Data":"f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49"} Dec 05 16:17:47 crc kubenswrapper[4858]: I1205 16:17:47.305847 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdpn2" event={"ID":"2222d176-ec89-4738-9576-71af0b4b567d","Type":"ContainerStarted","Data":"58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb"} Dec 05 16:17:47 crc kubenswrapper[4858]: I1205 16:17:47.331397 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tdpn2" podStartSLOduration=2.867298043 podStartE2EDuration="5.330056271s" 
podCreationTimestamp="2025-12-05 16:17:42 +0000 UTC" firstStartedPulling="2025-12-05 16:17:44.264856707 +0000 UTC m=+8472.812454846" lastFinishedPulling="2025-12-05 16:17:46.727614935 +0000 UTC m=+8475.275213074" observedRunningTime="2025-12-05 16:17:47.322782024 +0000 UTC m=+8475.870380183" watchObservedRunningTime="2025-12-05 16:17:47.330056271 +0000 UTC m=+8475.877654410" Dec 05 16:17:52 crc kubenswrapper[4858]: I1205 16:17:52.812473 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:52 crc kubenswrapper[4858]: I1205 16:17:52.813011 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:52 crc kubenswrapper[4858]: I1205 16:17:52.859702 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:53 crc kubenswrapper[4858]: I1205 16:17:53.409718 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:53 crc kubenswrapper[4858]: I1205 16:17:53.462219 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdpn2"] Dec 05 16:17:55 crc kubenswrapper[4858]: I1205 16:17:55.377081 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tdpn2" podUID="2222d176-ec89-4738-9576-71af0b4b567d" containerName="registry-server" containerID="cri-o://58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb" gracePeriod=2 Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.090354 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.195703 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-catalog-content\") pod \"2222d176-ec89-4738-9576-71af0b4b567d\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.195918 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-utilities\") pod \"2222d176-ec89-4738-9576-71af0b4b567d\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.195962 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbf4d\" (UniqueName: \"kubernetes.io/projected/2222d176-ec89-4738-9576-71af0b4b567d-kube-api-access-vbf4d\") pod \"2222d176-ec89-4738-9576-71af0b4b567d\" (UID: \"2222d176-ec89-4738-9576-71af0b4b567d\") " Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.198873 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-utilities" (OuterVolumeSpecName: "utilities") pod "2222d176-ec89-4738-9576-71af0b4b567d" (UID: "2222d176-ec89-4738-9576-71af0b4b567d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.210194 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2222d176-ec89-4738-9576-71af0b4b567d" (UID: "2222d176-ec89-4738-9576-71af0b4b567d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.211605 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2222d176-ec89-4738-9576-71af0b4b567d-kube-api-access-vbf4d" (OuterVolumeSpecName: "kube-api-access-vbf4d") pod "2222d176-ec89-4738-9576-71af0b4b567d" (UID: "2222d176-ec89-4738-9576-71af0b4b567d"). InnerVolumeSpecName "kube-api-access-vbf4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.297682 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.297715 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2222d176-ec89-4738-9576-71af0b4b567d-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.297724 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbf4d\" (UniqueName: \"kubernetes.io/projected/2222d176-ec89-4738-9576-71af0b4b567d-kube-api-access-vbf4d\") on node \"crc\" DevicePath \"\"" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.390988 4858 generic.go:334] "Generic (PLEG): container finished" podID="2222d176-ec89-4738-9576-71af0b4b567d" containerID="58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb" exitCode=0 Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.391048 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdpn2" event={"ID":"2222d176-ec89-4738-9576-71af0b4b567d","Type":"ContainerDied","Data":"58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb"} Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.391115 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdpn2" event={"ID":"2222d176-ec89-4738-9576-71af0b4b567d","Type":"ContainerDied","Data":"9e8f98f227f08651aa6edc65543aaf72547206afe71278f9d999ad5d1910d232"} Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.391068 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdpn2" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.391531 4858 scope.go:117] "RemoveContainer" containerID="58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.421800 4858 scope.go:117] "RemoveContainer" containerID="f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.436656 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdpn2"] Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.464720 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdpn2"] Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.467434 4858 scope.go:117] "RemoveContainer" containerID="3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.507187 4858 scope.go:117] "RemoveContainer" containerID="58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb" Dec 05 16:17:56 crc kubenswrapper[4858]: E1205 16:17:56.508433 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb\": container with ID starting with 58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb not found: ID does not exist" containerID="58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.508657 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb"} err="failed to get container status \"58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb\": rpc error: code = NotFound desc = could not find container \"58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb\": container with ID starting with 58e29e8c8d3318a9be7d5010a1fd400bb542ef401f6822d44c1439d5e8f2e7fb not found: ID does not exist" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.508687 4858 scope.go:117] "RemoveContainer" containerID="f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49" Dec 05 16:17:56 crc kubenswrapper[4858]: E1205 16:17:56.509103 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49\": container with ID starting with f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49 not found: ID does not exist" containerID="f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.509126 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49"} err="failed to get container status \"f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49\": rpc error: code = NotFound desc = could not find container \"f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49\": container with ID starting with f478a78f38c37b6d81c5fe519c9ce2181469c23e23fd99b9b795764208470a49 not found: ID does not exist" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.509140 4858 scope.go:117] "RemoveContainer" 
containerID="3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242" Dec 05 16:17:56 crc kubenswrapper[4858]: E1205 16:17:56.509994 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242\": container with ID starting with 3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242 not found: ID does not exist" containerID="3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242" Dec 05 16:17:56 crc kubenswrapper[4858]: I1205 16:17:56.510034 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242"} err="failed to get container status \"3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242\": rpc error: code = NotFound desc = could not find container \"3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242\": container with ID starting with 3d0159b8be6107be3fa7b1f24061bdf07f5c9aa94f5fcb43bce9420e81e39242 not found: ID does not exist" Dec 05 16:17:57 crc kubenswrapper[4858]: I1205 16:17:57.910649 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2222d176-ec89-4738-9576-71af0b4b567d" path="/var/lib/kubelet/pods/2222d176-ec89-4738-9576-71af0b4b567d/volumes" Dec 05 16:20:14 crc kubenswrapper[4858]: I1205 16:20:14.760076 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:20:14 crc kubenswrapper[4858]: I1205 16:20:14.761116 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:20:44 crc kubenswrapper[4858]: I1205 16:20:44.760207 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:20:44 crc kubenswrapper[4858]: I1205 16:20:44.760788 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:21:14 crc kubenswrapper[4858]: I1205 16:21:14.760376 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:21:14 crc kubenswrapper[4858]: I1205 16:21:14.760995 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:21:14 crc kubenswrapper[4858]: I1205 16:21:14.761051 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 16:21:14 crc kubenswrapper[4858]: I1205 16:21:14.762061 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f9b9f4dd5c8770a3c011867259a569fb2e678a72cc8ad4dbdf2917858af9eb2"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 16:21:14 crc kubenswrapper[4858]: I1205 16:21:14.762136 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://6f9b9f4dd5c8770a3c011867259a569fb2e678a72cc8ad4dbdf2917858af9eb2" gracePeriod=600 Dec 05 16:21:15 crc kubenswrapper[4858]: I1205 16:21:15.270948 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="6f9b9f4dd5c8770a3c011867259a569fb2e678a72cc8ad4dbdf2917858af9eb2" exitCode=0 Dec 05 16:21:15 crc kubenswrapper[4858]: I1205 16:21:15.271033 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"6f9b9f4dd5c8770a3c011867259a569fb2e678a72cc8ad4dbdf2917858af9eb2"} Dec 05 16:21:15 crc kubenswrapper[4858]: I1205 16:21:15.271587 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501"} Dec 05 16:21:15 crc kubenswrapper[4858]: I1205 16:21:15.271680 4858 scope.go:117] "RemoveContainer" containerID="045ecacc23d459bcf26418cc9e292f867dd002da8ed087ad2ddc3ad9e134dcf3" Dec 05 16:23:44 crc kubenswrapper[4858]: I1205 16:23:44.759619 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:23:44 crc kubenswrapper[4858]: I1205 16:23:44.761404 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:24:14 crc kubenswrapper[4858]: I1205 16:24:14.760628 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:24:14 crc kubenswrapper[4858]: I1205 16:24:14.762559 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:24:44 crc kubenswrapper[4858]: I1205 16:24:44.759926 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 16:24:44 crc kubenswrapper[4858]: I1205 16:24:44.760469 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 16:24:44 crc kubenswrapper[4858]: I1205 16:24:44.760522 4858 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" Dec 05 16:24:44 crc kubenswrapper[4858]: I1205 16:24:44.761393 4858 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501"} pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 16:24:44 crc kubenswrapper[4858]: I1205 16:24:44.761448 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" containerID="cri-o://cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" gracePeriod=600 Dec 05 16:24:44 crc kubenswrapper[4858]: E1205 16:24:44.885257 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:24:45 crc kubenswrapper[4858]: I1205 16:24:45.131009 4858 generic.go:334] "Generic (PLEG): container finished" podID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" exitCode=0 Dec 05 16:24:45 crc kubenswrapper[4858]: I1205 16:24:45.131128 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerDied","Data":"cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501"} Dec 05 16:24:45 crc kubenswrapper[4858]: I1205 16:24:45.131245 4858 scope.go:117] "RemoveContainer" containerID="6f9b9f4dd5c8770a3c011867259a569fb2e678a72cc8ad4dbdf2917858af9eb2" Dec 05 16:24:45 crc kubenswrapper[4858]: I1205 16:24:45.133206 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:24:45 crc kubenswrapper[4858]: E1205 16:24:45.136704 4858 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:24:58 crc kubenswrapper[4858]: I1205 16:24:58.899267 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:24:58 crc kubenswrapper[4858]: E1205 16:24:58.900127 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:25:10 crc kubenswrapper[4858]: I1205 16:25:10.900195 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:25:10 crc kubenswrapper[4858]: E1205 16:25:10.902432 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:25:23 crc kubenswrapper[4858]: I1205 16:25:23.899577 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:25:23 crc kubenswrapper[4858]: E1205 16:25:23.900396 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:25:37 crc kubenswrapper[4858]: I1205 16:25:37.899479 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:25:37 crc kubenswrapper[4858]: E1205 16:25:37.900266 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:25:52 crc kubenswrapper[4858]: I1205 16:25:52.899900 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:25:52 crc kubenswrapper[4858]: E1205 16:25:52.900594 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:26:07 crc kubenswrapper[4858]: I1205 16:26:07.899845 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:26:07 crc kubenswrapper[4858]: E1205 16:26:07.901672 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:26:20 crc kubenswrapper[4858]: I1205 16:26:20.899566 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:26:20 crc kubenswrapper[4858]: E1205 16:26:20.900268 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:26:31 crc kubenswrapper[4858]: I1205 16:26:31.906022 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:26:31 crc kubenswrapper[4858]: E1205 16:26:31.906804 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:26:42 crc kubenswrapper[4858]: I1205 16:26:42.899704 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:26:42 crc kubenswrapper[4858]: E1205 16:26:42.900560 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:26:54 crc kubenswrapper[4858]: I1205 16:26:54.898982 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:26:54 crc kubenswrapper[4858]: E1205 16:26:54.899727 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" 
podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:27:05 crc kubenswrapper[4858]: I1205 16:27:05.920519 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x42mk"] Dec 05 16:27:05 crc kubenswrapper[4858]: E1205 16:27:05.930811 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2222d176-ec89-4738-9576-71af0b4b567d" containerName="extract-utilities" Dec 05 16:27:05 crc kubenswrapper[4858]: I1205 16:27:05.931079 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2222d176-ec89-4738-9576-71af0b4b567d" containerName="extract-utilities" Dec 05 16:27:05 crc kubenswrapper[4858]: E1205 16:27:05.931202 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2222d176-ec89-4738-9576-71af0b4b567d" containerName="extract-content" Dec 05 16:27:05 crc kubenswrapper[4858]: I1205 16:27:05.931280 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2222d176-ec89-4738-9576-71af0b4b567d" containerName="extract-content" Dec 05 16:27:05 crc kubenswrapper[4858]: E1205 16:27:05.931367 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2222d176-ec89-4738-9576-71af0b4b567d" containerName="registry-server" Dec 05 16:27:05 crc kubenswrapper[4858]: I1205 16:27:05.931437 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2222d176-ec89-4738-9576-71af0b4b567d" containerName="registry-server" Dec 05 16:27:05 crc kubenswrapper[4858]: I1205 16:27:05.932055 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="2222d176-ec89-4738-9576-71af0b4b567d" containerName="registry-server" Dec 05 16:27:05 crc kubenswrapper[4858]: I1205 16:27:05.935182 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:05 crc kubenswrapper[4858]: I1205 16:27:05.942751 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x42mk"] Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.059773 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-catalog-content\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.060116 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-utilities\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.060160 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9rrk\" (UniqueName: \"kubernetes.io/projected/f5534599-bdf8-4438-9399-4b21e060472d-kube-api-access-s9rrk\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.163255 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-catalog-content\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " 
pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.163522 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-utilities\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.163577 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9rrk\" (UniqueName: \"kubernetes.io/projected/f5534599-bdf8-4438-9399-4b21e060472d-kube-api-access-s9rrk\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.164030 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-catalog-content\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.165074 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-utilities\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.185064 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9rrk\" (UniqueName: \"kubernetes.io/projected/f5534599-bdf8-4438-9399-4b21e060472d-kube-api-access-s9rrk\") pod \"redhat-operators-x42mk\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:06 crc kubenswrapper[4858]: I1205 16:27:06.274917 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:07 crc kubenswrapper[4858]: I1205 16:27:07.118104 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x42mk"] Dec 05 16:27:07 crc kubenswrapper[4858]: I1205 16:27:07.395847 4858 generic.go:334] "Generic (PLEG): container finished" podID="f5534599-bdf8-4438-9399-4b21e060472d" containerID="f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7" exitCode=0 Dec 05 16:27:07 crc kubenswrapper[4858]: I1205 16:27:07.395915 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x42mk" event={"ID":"f5534599-bdf8-4438-9399-4b21e060472d","Type":"ContainerDied","Data":"f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7"} Dec 05 16:27:07 crc kubenswrapper[4858]: I1205 16:27:07.396115 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x42mk" event={"ID":"f5534599-bdf8-4438-9399-4b21e060472d","Type":"ContainerStarted","Data":"fa7415d6e20e0a7e8eb7109d10ed5ef4970f3b76d7bdd9b841da86ea66020c6d"} Dec 05 16:27:07 crc kubenswrapper[4858]: I1205 16:27:07.399105 4858 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 16:27:08 crc kubenswrapper[4858]: I1205 16:27:08.845546 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x42mk" event={"ID":"f5534599-bdf8-4438-9399-4b21e060472d","Type":"ContainerStarted","Data":"758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc"} Dec 05 16:27:09 crc kubenswrapper[4858]: I1205 16:27:09.911956 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:27:09 crc kubenswrapper[4858]: E1205 16:27:09.912536 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:27:13 crc kubenswrapper[4858]: I1205 16:27:13.169133 4858 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.865026628s: [/var/lib/containers/storage/overlay/5c84a6268f702ad267dca7007b22e11db22991e98d1fd61f2c66b8cd308ff038/diff /var/log/pods/openstack_glance-default-external-api-0_ebad303f-6b9b-4ae1-b012-0862a6280179/glance-httpd/0.log]; will not log again for this container unless duration exceeds 2s Dec 05 16:27:13 crc kubenswrapper[4858]: I1205 16:27:13.215272 4858 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 16:27:13 crc kubenswrapper[4858]: I1205 16:27:13.216079 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.361292 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7prnm"] Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.372035 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.385127 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7prnm"] Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.401501 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7676f\" (UniqueName: \"kubernetes.io/projected/acf9f81d-858c-4c52-862b-b952ed0fea88-kube-api-access-7676f\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.401668 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-utilities\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.401694 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-catalog-content\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.503245 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-utilities\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.503297 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-catalog-content\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.503501 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7676f\" (UniqueName: \"kubernetes.io/projected/acf9f81d-858c-4c52-862b-b952ed0fea88-kube-api-access-7676f\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.504222 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-utilities\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.505370 4858 generic.go:334] "Generic (PLEG): container finished" podID="f5534599-bdf8-4438-9399-4b21e060472d" 
containerID="758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc" exitCode=0 Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.505412 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x42mk" event={"ID":"f5534599-bdf8-4438-9399-4b21e060472d","Type":"ContainerDied","Data":"758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc"} Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.505869 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-catalog-content\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.544277 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7676f\" (UniqueName: \"kubernetes.io/projected/acf9f81d-858c-4c52-862b-b952ed0fea88-kube-api-access-7676f\") pod \"certified-operators-7prnm\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:17 crc kubenswrapper[4858]: I1205 16:27:17.698233 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:18 crc kubenswrapper[4858]: I1205 16:27:18.437274 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7prnm"] Dec 05 16:27:18 crc kubenswrapper[4858]: I1205 16:27:18.517073 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7prnm" event={"ID":"acf9f81d-858c-4c52-862b-b952ed0fea88","Type":"ContainerStarted","Data":"31c531ee6268fb124c531b499ec97d5e33447c03bd35e3d4205e216c9aecc4e6"} Dec 05 16:27:19 crc kubenswrapper[4858]: I1205 16:27:19.529198 4858 generic.go:334] "Generic (PLEG): container finished" podID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerID="fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c" exitCode=0 Dec 05 16:27:19 crc kubenswrapper[4858]: I1205 16:27:19.529635 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7prnm" event={"ID":"acf9f81d-858c-4c52-862b-b952ed0fea88","Type":"ContainerDied","Data":"fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c"} Dec 05 16:27:19 crc kubenswrapper[4858]: I1205 16:27:19.535919 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x42mk" event={"ID":"f5534599-bdf8-4438-9399-4b21e060472d","Type":"ContainerStarted","Data":"412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c"} Dec 05 16:27:19 crc kubenswrapper[4858]: I1205 16:27:19.583090 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x42mk" podStartSLOduration=3.909941501 podStartE2EDuration="14.578987964s" podCreationTimestamp="2025-12-05 16:27:05 +0000 UTC" firstStartedPulling="2025-12-05 16:27:07.397715443 +0000 UTC m=+9035.945313582" lastFinishedPulling="2025-12-05 16:27:18.066761916 +0000 UTC m=+9046.614360045" observedRunningTime="2025-12-05 16:27:19.575914721 +0000 UTC m=+9048.123512860" watchObservedRunningTime="2025-12-05 16:27:19.578987964 +0000 UTC m=+9048.126586103" Dec 05 16:27:20 crc kubenswrapper[4858]: I1205 16:27:20.547845 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7prnm" event={"ID":"acf9f81d-858c-4c52-862b-b952ed0fea88","Type":"ContainerStarted","Data":"0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c"} Dec 05 16:27:21 crc kubenswrapper[4858]: I1205 16:27:21.912370 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:27:21 crc kubenswrapper[4858]: E1205 16:27:21.912612 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.587217 4858 generic.go:334] "Generic (PLEG): container finished" podID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerID="0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c" exitCode=0 Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.587286 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7prnm" event={"ID":"acf9f81d-858c-4c52-862b-b952ed0fea88","Type":"ContainerDied","Data":"0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c"} Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.653580 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fb5qs"] Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.658117 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.667456 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fb5qs"] Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.761050 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-utilities\") pod \"community-operators-fb5qs\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.761125 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8phk\" (UniqueName: \"kubernetes.io/projected/51419d9f-21f9-48d4-8bdf-d783a73fbc66-kube-api-access-w8phk\") pod \"community-operators-fb5qs\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.761241 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-catalog-content\") pod \"community-operators-fb5qs\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.862586 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-catalog-content\") pod \"community-operators-fb5qs\" (UID: 
\"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.863097 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-utilities\") pod \"community-operators-fb5qs\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.863300 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8phk\" (UniqueName: \"kubernetes.io/projected/51419d9f-21f9-48d4-8bdf-d783a73fbc66-kube-api-access-w8phk\") pod \"community-operators-fb5qs\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.864584 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-catalog-content\") pod \"community-operators-fb5qs\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.864892 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-utilities\") pod \"community-operators-fb5qs\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:23 crc kubenswrapper[4858]: I1205 16:27:23.914792 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8phk\" (UniqueName: \"kubernetes.io/projected/51419d9f-21f9-48d4-8bdf-d783a73fbc66-kube-api-access-w8phk\") pod \"community-operators-fb5qs\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:24 crc kubenswrapper[4858]: I1205 16:27:24.017750 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:24 crc kubenswrapper[4858]: I1205 16:27:24.615372 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7prnm" event={"ID":"acf9f81d-858c-4c52-862b-b952ed0fea88","Type":"ContainerStarted","Data":"62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845"} Dec 05 16:27:24 crc kubenswrapper[4858]: I1205 16:27:24.638198 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7prnm" podStartSLOduration=3.099093458 podStartE2EDuration="7.638180758s" podCreationTimestamp="2025-12-05 16:27:17 +0000 UTC" firstStartedPulling="2025-12-05 16:27:19.531605744 +0000 UTC m=+9048.079203883" lastFinishedPulling="2025-12-05 16:27:24.070693044 +0000 UTC m=+9052.618291183" observedRunningTime="2025-12-05 16:27:24.636718128 +0000 UTC m=+9053.184316267" watchObservedRunningTime="2025-12-05 16:27:24.638180758 +0000 UTC m=+9053.185778897" Dec 05 16:27:24 crc kubenswrapper[4858]: I1205 16:27:24.833448 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fb5qs"] Dec 05 16:27:24 crc kubenswrapper[4858]: W1205 16:27:24.849869 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51419d9f_21f9_48d4_8bdf_d783a73fbc66.slice/crio-31eee0df96126161c1916fb3f5aaa04fcf2e18d460ec4607e57cb6fafb1bd740 WatchSource:0}: Error finding container 31eee0df96126161c1916fb3f5aaa04fcf2e18d460ec4607e57cb6fafb1bd740: Status 404 returned error can't find the container with id 31eee0df96126161c1916fb3f5aaa04fcf2e18d460ec4607e57cb6fafb1bd740 Dec 05 16:27:25 crc kubenswrapper[4858]: I1205 16:27:25.626134 4858 generic.go:334] "Generic (PLEG): container finished" podID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerID="29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b" exitCode=0 Dec 05 16:27:25 crc kubenswrapper[4858]: I1205 16:27:25.626239 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5qs" event={"ID":"51419d9f-21f9-48d4-8bdf-d783a73fbc66","Type":"ContainerDied","Data":"29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b"} Dec 05 16:27:25 crc kubenswrapper[4858]: I1205 16:27:25.626450 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5qs" event={"ID":"51419d9f-21f9-48d4-8bdf-d783a73fbc66","Type":"ContainerStarted","Data":"31eee0df96126161c1916fb3f5aaa04fcf2e18d460ec4607e57cb6fafb1bd740"} Dec 05 16:27:26 crc kubenswrapper[4858]: I1205 16:27:26.275795 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:26 crc kubenswrapper[4858]: I1205 16:27:26.276214 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:26 crc kubenswrapper[4858]: I1205 16:27:26.639427 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5qs" event={"ID":"51419d9f-21f9-48d4-8bdf-d783a73fbc66","Type":"ContainerStarted","Data":"c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845"} Dec 05 16:27:27 crc kubenswrapper[4858]: I1205 16:27:27.348554 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x42mk" 
podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="registry-server" probeResult="failure" output=< Dec 05 16:27:27 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:27:27 crc kubenswrapper[4858]: > Dec 05 16:27:27 crc kubenswrapper[4858]: I1205 16:27:27.648131 4858 generic.go:334] "Generic (PLEG): container finished" podID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerID="c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845" exitCode=0 Dec 05 16:27:27 crc kubenswrapper[4858]: I1205 16:27:27.648172 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5qs" event={"ID":"51419d9f-21f9-48d4-8bdf-d783a73fbc66","Type":"ContainerDied","Data":"c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845"} Dec 05 16:27:27 crc kubenswrapper[4858]: I1205 16:27:27.699200 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:27 crc kubenswrapper[4858]: I1205 16:27:27.699247 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:28 crc kubenswrapper[4858]: I1205 16:27:28.749236 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7prnm" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="registry-server" probeResult="failure" output=< Dec 05 16:27:28 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:27:28 crc kubenswrapper[4858]: > Dec 05 16:27:29 crc kubenswrapper[4858]: I1205 16:27:29.668726 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5qs" event={"ID":"51419d9f-21f9-48d4-8bdf-d783a73fbc66","Type":"ContainerStarted","Data":"54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4"} Dec 05 16:27:29 crc kubenswrapper[4858]: I1205 16:27:29.692670 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fb5qs" podStartSLOduration=3.170292094 podStartE2EDuration="6.692650043s" podCreationTimestamp="2025-12-05 16:27:23 +0000 UTC" firstStartedPulling="2025-12-05 16:27:25.627816566 +0000 UTC m=+9054.175414715" lastFinishedPulling="2025-12-05 16:27:29.150174525 +0000 UTC m=+9057.697772664" observedRunningTime="2025-12-05 16:27:29.689342083 +0000 UTC m=+9058.236940242" watchObservedRunningTime="2025-12-05 16:27:29.692650043 +0000 UTC m=+9058.240248182" Dec 05 16:27:34 crc kubenswrapper[4858]: I1205 16:27:34.018681 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:34 crc kubenswrapper[4858]: I1205 16:27:34.019370 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:34 crc kubenswrapper[4858]: I1205 16:27:34.900571 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:27:34 crc kubenswrapper[4858]: E1205 16:27:34.901333 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:27:35 crc kubenswrapper[4858]: I1205 16:27:35.110398 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fb5qs" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="registry-server" probeResult="failure" output=< Dec 05 16:27:35 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:27:35 crc kubenswrapper[4858]: > Dec 05 16:27:37 crc kubenswrapper[4858]: I1205 16:27:37.323152 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x42mk" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="registry-server" probeResult="failure" output=< Dec 05 16:27:37 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:27:37 crc kubenswrapper[4858]: > Dec 05 16:27:38 crc kubenswrapper[4858]: I1205 16:27:38.749849 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7prnm" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="registry-server" probeResult="failure" output=< Dec 05 16:27:38 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:27:38 crc kubenswrapper[4858]: > Dec 05 16:27:44 crc kubenswrapper[4858]: I1205 16:27:44.086592 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:44 crc kubenswrapper[4858]: I1205 16:27:44.218414 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:44 crc kubenswrapper[4858]: I1205 16:27:44.345110 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fb5qs"] Dec 05 16:27:45 crc kubenswrapper[4858]: I1205 16:27:45.804568 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fb5qs" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="registry-server" containerID="cri-o://54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4" gracePeriod=2 Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.526962 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.644970 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-catalog-content\") pod \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.645098 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-utilities\") pod \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.645257 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8phk\" (UniqueName: \"kubernetes.io/projected/51419d9f-21f9-48d4-8bdf-d783a73fbc66-kube-api-access-w8phk\") pod \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\" (UID: \"51419d9f-21f9-48d4-8bdf-d783a73fbc66\") " Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.647059 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-utilities" (OuterVolumeSpecName: "utilities") pod "51419d9f-21f9-48d4-8bdf-d783a73fbc66" (UID: "51419d9f-21f9-48d4-8bdf-d783a73fbc66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.654103 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51419d9f-21f9-48d4-8bdf-d783a73fbc66-kube-api-access-w8phk" (OuterVolumeSpecName: "kube-api-access-w8phk") pod "51419d9f-21f9-48d4-8bdf-d783a73fbc66" (UID: "51419d9f-21f9-48d4-8bdf-d783a73fbc66"). InnerVolumeSpecName "kube-api-access-w8phk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.709011 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51419d9f-21f9-48d4-8bdf-d783a73fbc66" (UID: "51419d9f-21f9-48d4-8bdf-d783a73fbc66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.747212 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.747257 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8phk\" (UniqueName: \"kubernetes.io/projected/51419d9f-21f9-48d4-8bdf-d783a73fbc66-kube-api-access-w8phk\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.747269 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51419d9f-21f9-48d4-8bdf-d783a73fbc66-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.813145 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fb5qs" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.813163 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5qs" event={"ID":"51419d9f-21f9-48d4-8bdf-d783a73fbc66","Type":"ContainerDied","Data":"54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4"} Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.813247 4858 generic.go:334] "Generic (PLEG): container finished" podID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerID="54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4" exitCode=0 Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.813516 4858 scope.go:117] "RemoveContainer" containerID="54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.813309 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb5qs" event={"ID":"51419d9f-21f9-48d4-8bdf-d783a73fbc66","Type":"ContainerDied","Data":"31eee0df96126161c1916fb3f5aaa04fcf2e18d460ec4607e57cb6fafb1bd740"} Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.843396 4858 scope.go:117] "RemoveContainer" containerID="c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.845632 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fb5qs"] Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.864703 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fb5qs"] Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.872088 4858 scope.go:117] "RemoveContainer" containerID="29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.916922 4858 scope.go:117] "RemoveContainer" containerID="54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4" Dec 05 16:27:46 crc kubenswrapper[4858]: E1205 16:27:46.918368 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4\": container with ID starting with 54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4 not found: ID does not exist" containerID="54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.918573 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4"} err="failed to get container status \"54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4\": rpc error: code = NotFound desc = could not find container \"54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4\": container with ID starting with 54ea546385db2721790e9eebd0caf58bb1a86a519f118ed021ecde9d48947ea4 not found: ID does not exist" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.918638 4858 scope.go:117] "RemoveContainer" containerID="c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845" Dec 05 16:27:46 crc kubenswrapper[4858]: E1205 16:27:46.919271 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845\": container with ID 
starting with c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845 not found: ID does not exist" containerID="c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.919312 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845"} err="failed to get container status \"c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845\": rpc error: code = NotFound desc = could not find container \"c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845\": container with ID starting with c8fa115f23ec0499a967ae230f0446965df2e40c1003f0f6085bcd734e0b0845 not found: ID does not exist" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.919338 4858 scope.go:117] "RemoveContainer" containerID="29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b" Dec 05 16:27:46 crc kubenswrapper[4858]: E1205 16:27:46.919605 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b\": container with ID starting with 29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b not found: ID does not exist" containerID="29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b" Dec 05 16:27:46 crc kubenswrapper[4858]: I1205 16:27:46.919636 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b"} err="failed to get container status \"29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b\": rpc error: code = NotFound desc = could not find container \"29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b\": container with ID starting with 29f31616ee9fdcf50369689160568a73da9d399257e2b25d3d33f76c380d127b not found: ID does not exist" Dec 05 16:27:47 crc kubenswrapper[4858]: I1205 16:27:47.323761 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x42mk" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="registry-server" probeResult="failure" output=< Dec 05 16:27:47 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:27:47 crc kubenswrapper[4858]: > Dec 05 16:27:47 crc kubenswrapper[4858]: I1205 16:27:47.759749 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:47 crc kubenswrapper[4858]: I1205 16:27:47.856292 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:47 crc kubenswrapper[4858]: I1205 16:27:47.911317 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" path="/var/lib/kubelet/pods/51419d9f-21f9-48d4-8bdf-d783a73fbc66/volumes" Dec 05 16:27:48 crc kubenswrapper[4858]: I1205 16:27:48.737545 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7prnm"] Dec 05 16:27:48 crc kubenswrapper[4858]: I1205 16:27:48.874630 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7prnm" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="registry-server" 
containerID="cri-o://62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845" gracePeriod=2 Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.353841 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.500391 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7676f\" (UniqueName: \"kubernetes.io/projected/acf9f81d-858c-4c52-862b-b952ed0fea88-kube-api-access-7676f\") pod \"acf9f81d-858c-4c52-862b-b952ed0fea88\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.500537 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-utilities\") pod \"acf9f81d-858c-4c52-862b-b952ed0fea88\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.500618 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-catalog-content\") pod \"acf9f81d-858c-4c52-862b-b952ed0fea88\" (UID: \"acf9f81d-858c-4c52-862b-b952ed0fea88\") " Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.502319 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-utilities" (OuterVolumeSpecName: "utilities") pod "acf9f81d-858c-4c52-862b-b952ed0fea88" (UID: "acf9f81d-858c-4c52-862b-b952ed0fea88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.508352 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acf9f81d-858c-4c52-862b-b952ed0fea88-kube-api-access-7676f" (OuterVolumeSpecName: "kube-api-access-7676f") pod "acf9f81d-858c-4c52-862b-b952ed0fea88" (UID: "acf9f81d-858c-4c52-862b-b952ed0fea88"). InnerVolumeSpecName "kube-api-access-7676f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.563532 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acf9f81d-858c-4c52-862b-b952ed0fea88" (UID: "acf9f81d-858c-4c52-862b-b952ed0fea88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.603049 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7676f\" (UniqueName: \"kubernetes.io/projected/acf9f81d-858c-4c52-862b-b952ed0fea88-kube-api-access-7676f\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.603082 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.603091 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acf9f81d-858c-4c52-862b-b952ed0fea88-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.892617 4858 generic.go:334] "Generic (PLEG): container finished" podID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerID="62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845" exitCode=0 Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.892762 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7prnm" event={"ID":"acf9f81d-858c-4c52-862b-b952ed0fea88","Type":"ContainerDied","Data":"62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845"} Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.893304 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7prnm" event={"ID":"acf9f81d-858c-4c52-862b-b952ed0fea88","Type":"ContainerDied","Data":"31c531ee6268fb124c531b499ec97d5e33447c03bd35e3d4205e216c9aecc4e6"} Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.893344 4858 scope.go:117] "RemoveContainer" containerID="62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.893023 4858 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7prnm" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.900939 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:27:49 crc kubenswrapper[4858]: E1205 16:27:49.901461 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.945187 4858 scope.go:117] "RemoveContainer" containerID="0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.969696 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7prnm"] Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.974994 4858 scope.go:117] "RemoveContainer" containerID="fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c" Dec 05 16:27:49 crc kubenswrapper[4858]: I1205 16:27:49.980154 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7prnm"] Dec 05 16:27:50 crc kubenswrapper[4858]: I1205 16:27:50.020796 4858 scope.go:117] "RemoveContainer" containerID="62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845" Dec 05 16:27:50 crc kubenswrapper[4858]: E1205 16:27:50.021369 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845\": container with ID starting with 62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845 not found: ID does not exist" containerID="62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845" Dec 05 16:27:50 crc kubenswrapper[4858]: I1205 16:27:50.021411 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845"} err="failed to get container status \"62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845\": rpc error: code = NotFound desc = could not find container \"62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845\": container with ID starting with 62176e9fa1109ee3f1643e8d6628373fd2f129e64eb4b701e26ff40b1d179845 not found: ID does not exist" Dec 05 16:27:50 crc kubenswrapper[4858]: I1205 16:27:50.021437 4858 scope.go:117] "RemoveContainer" containerID="0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c" Dec 05 16:27:50 crc kubenswrapper[4858]: E1205 16:27:50.021768 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c\": container with ID starting with 0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c not found: ID does not exist" containerID="0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c" Dec 05 16:27:50 crc kubenswrapper[4858]: I1205 16:27:50.021853 4858 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c"} err="failed to get container status \"0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c\": rpc error: code = NotFound desc = could not find container \"0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c\": container with ID starting with 0cf6b1a59b9ff372f4c47dac0f3790935b8fe5c5cfcefbbe0c0f4240d6c7604c not found: ID does not exist" Dec 05 16:27:50 crc kubenswrapper[4858]: I1205 16:27:50.021902 4858 scope.go:117] "RemoveContainer" containerID="fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c" Dec 05 16:27:50 crc kubenswrapper[4858]: E1205 16:27:50.022303 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c\": container with ID starting with fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c not found: ID does not exist" containerID="fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c" Dec 05 16:27:50 crc kubenswrapper[4858]: I1205 16:27:50.022351 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c"} err="failed to get container status \"fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c\": rpc error: code = NotFound desc = could not find container \"fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c\": container with ID starting with fde820ce06e3b3705661b0f1bd5ee41f05afddcd0e5325ca9be621e863fc1e0c not found: ID does not exist" Dec 05 16:27:52 crc kubenswrapper[4858]: I1205 16:27:52.836066 4858 patch_prober.go:28] interesting pod/oauth-openshift-748578cd96-nlm54 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 16:27:52 crc kubenswrapper[4858]: I1205 16:27:52.838365 4858 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-748578cd96-nlm54" podUID="e81e683d-b55e-4076-b333-4e68d8caed3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 16:27:52 crc kubenswrapper[4858]: I1205 16:27:52.859978 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" path="/var/lib/kubelet/pods/acf9f81d-858c-4c52-862b-b952ed0fea88/volumes" Dec 05 16:27:56 crc kubenswrapper[4858]: I1205 16:27:56.326951 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:56 crc kubenswrapper[4858]: I1205 16:27:56.379492 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:56 crc kubenswrapper[4858]: I1205 16:27:56.564599 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x42mk"] Dec 05 16:27:57 crc kubenswrapper[4858]: I1205 16:27:57.884636 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x42mk" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="registry-server" 
containerID="cri-o://412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c" gracePeriod=2 Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.447510 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.540257 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-utilities\") pod \"f5534599-bdf8-4438-9399-4b21e060472d\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.540349 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-catalog-content\") pod \"f5534599-bdf8-4438-9399-4b21e060472d\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.540395 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9rrk\" (UniqueName: \"kubernetes.io/projected/f5534599-bdf8-4438-9399-4b21e060472d-kube-api-access-s9rrk\") pod \"f5534599-bdf8-4438-9399-4b21e060472d\" (UID: \"f5534599-bdf8-4438-9399-4b21e060472d\") " Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.541559 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-utilities" (OuterVolumeSpecName: "utilities") pod "f5534599-bdf8-4438-9399-4b21e060472d" (UID: "f5534599-bdf8-4438-9399-4b21e060472d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.549147 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.549475 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5534599-bdf8-4438-9399-4b21e060472d-kube-api-access-s9rrk" (OuterVolumeSpecName: "kube-api-access-s9rrk") pod "f5534599-bdf8-4438-9399-4b21e060472d" (UID: "f5534599-bdf8-4438-9399-4b21e060472d"). InnerVolumeSpecName "kube-api-access-s9rrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.642306 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5534599-bdf8-4438-9399-4b21e060472d" (UID: "f5534599-bdf8-4438-9399-4b21e060472d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.650551 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5534599-bdf8-4438-9399-4b21e060472d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.650578 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9rrk\" (UniqueName: \"kubernetes.io/projected/f5534599-bdf8-4438-9399-4b21e060472d-kube-api-access-s9rrk\") on node \"crc\" DevicePath \"\"" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.895133 4858 generic.go:334] "Generic (PLEG): container finished" podID="f5534599-bdf8-4438-9399-4b21e060472d" containerID="412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c" exitCode=0 Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.895197 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x42mk" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.895230 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x42mk" event={"ID":"f5534599-bdf8-4438-9399-4b21e060472d","Type":"ContainerDied","Data":"412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c"} Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.896747 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x42mk" event={"ID":"f5534599-bdf8-4438-9399-4b21e060472d","Type":"ContainerDied","Data":"fa7415d6e20e0a7e8eb7109d10ed5ef4970f3b76d7bdd9b841da86ea66020c6d"} Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.896768 4858 scope.go:117] "RemoveContainer" containerID="412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.935129 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x42mk"] Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.938702 4858 scope.go:117] "RemoveContainer" containerID="758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.960730 4858 scope.go:117] "RemoveContainer" containerID="f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7" Dec 05 16:27:58 crc kubenswrapper[4858]: I1205 16:27:58.962151 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x42mk"] Dec 05 16:27:59 crc kubenswrapper[4858]: I1205 16:27:59.025969 4858 scope.go:117] "RemoveContainer" containerID="412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c" Dec 05 16:27:59 crc kubenswrapper[4858]: E1205 16:27:59.026453 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c\": container with ID starting with 412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c not found: ID does not exist" containerID="412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c" Dec 05 16:27:59 crc kubenswrapper[4858]: I1205 16:27:59.026503 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c"} err="failed to get container status \"412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c\": 
rpc error: code = NotFound desc = could not find container \"412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c\": container with ID starting with 412cde7bb852a9687444496528003d13cdd4c7f25fc5c986ac19b609c97b6a4c not found: ID does not exist" Dec 05 16:27:59 crc kubenswrapper[4858]: I1205 16:27:59.026532 4858 scope.go:117] "RemoveContainer" containerID="758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc" Dec 05 16:27:59 crc kubenswrapper[4858]: E1205 16:27:59.026794 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc\": container with ID starting with 758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc not found: ID does not exist" containerID="758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc" Dec 05 16:27:59 crc kubenswrapper[4858]: I1205 16:27:59.026815 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc"} err="failed to get container status \"758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc\": rpc error: code = NotFound desc = could not find container \"758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc\": container with ID starting with 758f4d7e446ea823bcd21827b815066cedd8f0a36c3edc65ea783cfc1276b4bc not found: ID does not exist" Dec 05 16:27:59 crc kubenswrapper[4858]: I1205 16:27:59.026862 4858 scope.go:117] "RemoveContainer" containerID="f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7" Dec 05 16:27:59 crc kubenswrapper[4858]: E1205 16:27:59.027122 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7\": container with ID starting with f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7 not found: ID does not exist" containerID="f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7" Dec 05 16:27:59 crc kubenswrapper[4858]: I1205 16:27:59.027143 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7"} err="failed to get container status \"f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7\": rpc error: code = NotFound desc = could not find container \"f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7\": container with ID starting with f6bcd535a017d5e54a2eaab12cbe6065451aaf680a1f051106ba413de31d0db7 not found: ID does not exist" Dec 05 16:27:59 crc kubenswrapper[4858]: I1205 16:27:59.922765 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5534599-bdf8-4438-9399-4b21e060472d" path="/var/lib/kubelet/pods/f5534599-bdf8-4438-9399-4b21e060472d/volumes" Dec 05 16:28:01 crc kubenswrapper[4858]: I1205 16:28:01.906129 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:28:01 crc kubenswrapper[4858]: E1205 16:28:01.906722 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:28:15 crc kubenswrapper[4858]: I1205 16:28:15.899221 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:28:15 crc kubenswrapper[4858]: E1205 16:28:15.901960 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.387177 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qbxst"] Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.390500 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="extract-utilities" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.390674 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="extract-utilities" Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.390754 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="extract-utilities" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.390875 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="extract-utilities" Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.390969 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.391035 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.391124 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="extract-utilities" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.391192 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="extract-utilities" Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.391286 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="extract-content" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.391354 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="extract-content" Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.391423 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.391492 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.391565 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="extract-content" 
Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.391632 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="extract-content" Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.391728 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.391782 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: E1205 16:28:24.391896 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="extract-content" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.391968 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="extract-content" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.392548 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="51419d9f-21f9-48d4-8bdf-d783a73fbc66" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.392631 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf9f81d-858c-4c52-862b-b952ed0fea88" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.392693 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5534599-bdf8-4438-9399-4b21e060472d" containerName="registry-server" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.395671 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.424170 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbxst"] Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.550973 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2fq\" (UniqueName: \"kubernetes.io/projected/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-kube-api-access-zk2fq\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.551081 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-utilities\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.551129 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-catalog-content\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.653067 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2fq\" (UniqueName: \"kubernetes.io/projected/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-kube-api-access-zk2fq\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " 
pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.653404 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-utilities\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.653474 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-catalog-content\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.655616 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-catalog-content\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.655962 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-utilities\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.690194 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2fq\" (UniqueName: \"kubernetes.io/projected/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-kube-api-access-zk2fq\") pod \"redhat-marketplace-qbxst\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") " pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:24 crc kubenswrapper[4858]: I1205 16:28:24.728608 4858 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:25 crc kubenswrapper[4858]: I1205 16:28:25.394087 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbxst"] Dec 05 16:28:26 crc kubenswrapper[4858]: I1205 16:28:26.152207 4858 generic.go:334] "Generic (PLEG): container finished" podID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerID="facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e" exitCode=0 Dec 05 16:28:26 crc kubenswrapper[4858]: I1205 16:28:26.152752 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbxst" event={"ID":"ebb3c48b-5f0b-4212-9e66-d3ede95333ec","Type":"ContainerDied","Data":"facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e"} Dec 05 16:28:26 crc kubenswrapper[4858]: I1205 16:28:26.152889 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbxst" event={"ID":"ebb3c48b-5f0b-4212-9e66-d3ede95333ec","Type":"ContainerStarted","Data":"0de2d33f8540f5bed2b43c85cce04a5fd6952f604b1497478bcf4febc29bdd09"} Dec 05 16:28:27 crc kubenswrapper[4858]: I1205 16:28:27.162190 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbxst" event={"ID":"ebb3c48b-5f0b-4212-9e66-d3ede95333ec","Type":"ContainerStarted","Data":"8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2"} Dec 05 16:28:27 crc kubenswrapper[4858]: I1205 16:28:27.899698 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:28:27 crc kubenswrapper[4858]: E1205 16:28:27.899986 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:28:28 crc kubenswrapper[4858]: I1205 16:28:28.172193 4858 generic.go:334] "Generic (PLEG): container finished" podID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerID="8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2" exitCode=0 Dec 05 16:28:28 crc kubenswrapper[4858]: I1205 16:28:28.173190 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbxst" event={"ID":"ebb3c48b-5f0b-4212-9e66-d3ede95333ec","Type":"ContainerDied","Data":"8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2"} Dec 05 16:28:29 crc kubenswrapper[4858]: I1205 16:28:29.184311 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbxst" event={"ID":"ebb3c48b-5f0b-4212-9e66-d3ede95333ec","Type":"ContainerStarted","Data":"64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58"} Dec 05 16:28:29 crc kubenswrapper[4858]: I1205 16:28:29.217079 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qbxst" podStartSLOduration=2.637321002 podStartE2EDuration="5.216298722s" podCreationTimestamp="2025-12-05 16:28:24 +0000 UTC" firstStartedPulling="2025-12-05 16:28:26.154444465 +0000 UTC m=+9114.702042604" lastFinishedPulling="2025-12-05 16:28:28.733422185 +0000 UTC m=+9117.281020324" observedRunningTime="2025-12-05 
16:28:29.205656325 +0000 UTC m=+9117.753254494" watchObservedRunningTime="2025-12-05 16:28:29.216298722 +0000 UTC m=+9117.763896871" Dec 05 16:28:34 crc kubenswrapper[4858]: I1205 16:28:34.729324 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:34 crc kubenswrapper[4858]: I1205 16:28:34.730965 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:35 crc kubenswrapper[4858]: I1205 16:28:35.779241 4858 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qbxst" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="registry-server" probeResult="failure" output=< Dec 05 16:28:35 crc kubenswrapper[4858]: timeout: failed to connect service ":50051" within 1s Dec 05 16:28:35 crc kubenswrapper[4858]: > Dec 05 16:28:42 crc kubenswrapper[4858]: I1205 16:28:42.901201 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:28:42 crc kubenswrapper[4858]: E1205 16:28:42.902594 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:28:44 crc kubenswrapper[4858]: I1205 16:28:44.780241 4858 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:44 crc kubenswrapper[4858]: I1205 16:28:44.846687 4858 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qbxst" Dec 05 16:28:45 crc kubenswrapper[4858]: I1205 16:28:45.031482 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbxst"] Dec 05 16:28:46 crc kubenswrapper[4858]: I1205 16:28:46.338750 4858 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qbxst" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="registry-server" containerID="cri-o://64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58" gracePeriod=2 Dec 05 16:28:46 crc kubenswrapper[4858]: E1205 16:28:46.502249 4858 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb3c48b_5f0b_4212_9e66_d3ede95333ec.slice/crio-64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58.scope\": RecentStats: unable to find data in memory cache]" Dec 05 16:28:46 crc kubenswrapper[4858]: I1205 16:28:46.990021 4858 util.go:48] "No ready sandbox for pod can be found. 
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.002437 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-utilities\") pod \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") "
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.002542 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk2fq\" (UniqueName: \"kubernetes.io/projected/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-kube-api-access-zk2fq\") pod \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") "
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.002881 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-catalog-content\") pod \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\" (UID: \"ebb3c48b-5f0b-4212-9e66-d3ede95333ec\") "
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.004368 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-utilities" (OuterVolumeSpecName: "utilities") pod "ebb3c48b-5f0b-4212-9e66-d3ede95333ec" (UID: "ebb3c48b-5f0b-4212-9e66-d3ede95333ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.006682 4858 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-utilities\") on node \"crc\" DevicePath \"\""
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.024740 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-kube-api-access-zk2fq" (OuterVolumeSpecName: "kube-api-access-zk2fq") pod "ebb3c48b-5f0b-4212-9e66-d3ede95333ec" (UID: "ebb3c48b-5f0b-4212-9e66-d3ede95333ec"). InnerVolumeSpecName "kube-api-access-zk2fq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.041302 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebb3c48b-5f0b-4212-9e66-d3ede95333ec" (UID: "ebb3c48b-5f0b-4212-9e66-d3ede95333ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.107888 4858 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.107924 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk2fq\" (UniqueName: \"kubernetes.io/projected/ebb3c48b-5f0b-4212-9e66-d3ede95333ec-kube-api-access-zk2fq\") on node \"crc\" DevicePath \"\""
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.348850 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbxst" event={"ID":"ebb3c48b-5f0b-4212-9e66-d3ede95333ec","Type":"ContainerDied","Data":"64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58"}
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.349602 4858 scope.go:117] "RemoveContainer" containerID="64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.348794 4858 generic.go:334] "Generic (PLEG): container finished" podID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerID="64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58" exitCode=0
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.348864 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qbxst"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.349778 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qbxst" event={"ID":"ebb3c48b-5f0b-4212-9e66-d3ede95333ec","Type":"ContainerDied","Data":"0de2d33f8540f5bed2b43c85cce04a5fd6952f604b1497478bcf4febc29bdd09"}
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.382901 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbxst"]
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.389569 4858 scope.go:117] "RemoveContainer" containerID="8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.397196 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qbxst"]
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.417685 4858 scope.go:117] "RemoveContainer" containerID="facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.462437 4858 scope.go:117] "RemoveContainer" containerID="64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58"
Dec 05 16:28:47 crc kubenswrapper[4858]: E1205 16:28:47.463602 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58\": container with ID starting with 64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58 not found: ID does not exist" containerID="64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.463974 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58"} err="failed to get container status \"64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58\": rpc error: code = NotFound desc = could not find container \"64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58\": container with ID starting with 64dcf46894d9aac4f0fa8e52529d55845b61ec82a7bb2f86e68f2e33205d8b58 not found: ID does not exist"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.464101 4858 scope.go:117] "RemoveContainer" containerID="8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2"
Dec 05 16:28:47 crc kubenswrapper[4858]: E1205 16:28:47.464428 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2\": container with ID starting with 8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2 not found: ID does not exist" containerID="8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.464447 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2"} err="failed to get container status \"8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2\": rpc error: code = NotFound desc = could not find container \"8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2\": container with ID starting with 8c72f1993830d1630ac0593865942d18376e7bba333d7bba317cfaccf7f2a9b2 not found: ID does not exist"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.464464 4858 scope.go:117] "RemoveContainer" containerID="facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e"
Dec 05 16:28:47 crc kubenswrapper[4858]: E1205 16:28:47.464929 4858 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e\": container with ID starting with facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e not found: ID does not exist" containerID="facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.464964 4858 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e"} err="failed to get container status \"facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e\": rpc error: code = NotFound desc = could not find container \"facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e\": container with ID starting with facab00ed38a097b745cf72b7c7fd959eb3b908b795dc4488ac025e11257e58e not found: ID does not exist"
Dec 05 16:28:47 crc kubenswrapper[4858]: I1205 16:28:47.909804 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" path="/var/lib/kubelet/pods/ebb3c48b-5f0b-4212-9e66-d3ede95333ec/volumes"
Dec 05 16:28:56 crc kubenswrapper[4858]: I1205 16:28:56.429427 4858 generic.go:334] "Generic (PLEG): container finished" podID="2dc2f8c9-4ac5-4830-bf63-168798f46840" containerID="9433a658d406b33b0a6180ff141b6227da9fe9cb941dc525a912a73b30acdf8e" exitCode=0
Dec 05 16:28:56 crc kubenswrapper[4858]: I1205 16:28:56.429508 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"2dc2f8c9-4ac5-4830-bf63-168798f46840","Type":"ContainerDied","Data":"9433a658d406b33b0a6180ff141b6227da9fe9cb941dc525a912a73b30acdf8e"}
event={"ID":"2dc2f8c9-4ac5-4830-bf63-168798f46840","Type":"ContainerDied","Data":"9433a658d406b33b0a6180ff141b6227da9fe9cb941dc525a912a73b30acdf8e"} Dec 05 16:28:56 crc kubenswrapper[4858]: I1205 16:28:56.899777 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501" Dec 05 16:28:56 crc kubenswrapper[4858]: E1205 16:28:56.900447 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.912868 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949434 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949536 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ssh-key\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949577 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949602 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-workdir\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949656 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config-secret\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949676 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ca-certs\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949751 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-temporary\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 
16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949799 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5qlg\" (UniqueName: \"kubernetes.io/projected/2dc2f8c9-4ac5-4830-bf63-168798f46840-kube-api-access-m5qlg\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.949815 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-config-data\") pod \"2dc2f8c9-4ac5-4830-bf63-168798f46840\" (UID: \"2dc2f8c9-4ac5-4830-bf63-168798f46840\") " Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.952134 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.952522 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-config-data" (OuterVolumeSpecName: "config-data") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.962101 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.972293 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc2f8c9-4ac5-4830-bf63-168798f46840-kube-api-access-m5qlg" (OuterVolumeSpecName: "kube-api-access-m5qlg") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "kube-api-access-m5qlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 16:28:57 crc kubenswrapper[4858]: I1205 16:28:57.980306 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.002339 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.027700 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.028931 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.040519 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2dc2f8c9-4ac5-4830-bf63-168798f46840" (UID: "2dc2f8c9-4ac5-4830-bf63-168798f46840"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.052290 4858 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.052877 4858 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.052919 4858 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.052935 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.052948 4858 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2dc2f8c9-4ac5-4830-bf63-168798f46840-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.052958 4858 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2dc2f8c9-4ac5-4830-bf63-168798f46840-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.052989 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5qlg\" (UniqueName: \"kubernetes.io/projected/2dc2f8c9-4ac5-4830-bf63-168798f46840-kube-api-access-m5qlg\") on node \"crc\" DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.052999 4858 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-config-data\") on node \"crc\" 
DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.053008 4858 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2dc2f8c9-4ac5-4830-bf63-168798f46840-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.072812 4858 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.154632 4858 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.485497 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"2dc2f8c9-4ac5-4830-bf63-168798f46840","Type":"ContainerDied","Data":"38cd4dc57a7aa42445690485da9b774c015810598856edb800825ab0872dee87"} Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.485861 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38cd4dc57a7aa42445690485da9b774c015810598856edb800825ab0872dee87" Dec 05 16:28:58 crc kubenswrapper[4858]: I1205 16:28:58.485562 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.186112 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 16:29:05 crc kubenswrapper[4858]: E1205 16:29:05.187479 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc2f8c9-4ac5-4830-bf63-168798f46840" containerName="tempest-tests-tempest-tests-runner" Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.187500 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc2f8c9-4ac5-4830-bf63-168798f46840" containerName="tempest-tests-tempest-tests-runner" Dec 05 16:29:05 crc kubenswrapper[4858]: E1205 16:29:05.187526 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="extract-content" Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.187532 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="extract-content" Dec 05 16:29:05 crc kubenswrapper[4858]: E1205 16:29:05.187546 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="extract-utilities" Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.187553 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="extract-utilities" Dec 05 16:29:05 crc kubenswrapper[4858]: E1205 16:29:05.187569 4858 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="registry-server" Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.187575 4858 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="registry-server" Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.188123 4858 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb3c48b-5f0b-4212-9e66-d3ede95333ec" containerName="registry-server" Dec 05 16:29:05 
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.189641 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.192851 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-xzq5q"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.206939 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.329362 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rsd\" (UniqueName: \"kubernetes.io/projected/db3095a1-e6c3-4d78-9852-701ed8bb105f-kube-api-access-d5rsd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db3095a1-e6c3-4d78-9852-701ed8bb105f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.329460 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db3095a1-e6c3-4d78-9852-701ed8bb105f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.432005 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5rsd\" (UniqueName: \"kubernetes.io/projected/db3095a1-e6c3-4d78-9852-701ed8bb105f-kube-api-access-d5rsd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db3095a1-e6c3-4d78-9852-701ed8bb105f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.432108 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db3095a1-e6c3-4d78-9852-701ed8bb105f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.433239 4858 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db3095a1-e6c3-4d78-9852-701ed8bb105f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.458529 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5rsd\" (UniqueName: \"kubernetes.io/projected/db3095a1-e6c3-4d78-9852-701ed8bb105f-kube-api-access-d5rsd\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db3095a1-e6c3-4d78-9852-701ed8bb105f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.460895 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"db3095a1-e6c3-4d78-9852-701ed8bb105f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:05 crc kubenswrapper[4858]: I1205 16:29:05.531620 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Dec 05 16:29:06 crc kubenswrapper[4858]: I1205 16:29:06.057085 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 05 16:29:06 crc kubenswrapper[4858]: W1205 16:29:06.065557 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb3095a1_e6c3_4d78_9852_701ed8bb105f.slice/crio-b9d9c40b71377122f8fd8e5f97678363681086e6fa9da5dd17b3646b8974e900 WatchSource:0}: Error finding container b9d9c40b71377122f8fd8e5f97678363681086e6fa9da5dd17b3646b8974e900: Status 404 returned error can't find the container with id b9d9c40b71377122f8fd8e5f97678363681086e6fa9da5dd17b3646b8974e900
Dec 05 16:29:06 crc kubenswrapper[4858]: I1205 16:29:06.561468 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"db3095a1-e6c3-4d78-9852-701ed8bb105f","Type":"ContainerStarted","Data":"b9d9c40b71377122f8fd8e5f97678363681086e6fa9da5dd17b3646b8974e900"}
Dec 05 16:29:07 crc kubenswrapper[4858]: I1205 16:29:07.574161 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"db3095a1-e6c3-4d78-9852-701ed8bb105f","Type":"ContainerStarted","Data":"bce2b2785c2f5a682fc3d3f31c09c870d3e6cd32dac7cb406330004189cb8bb7"}
Dec 05 16:29:07 crc kubenswrapper[4858]: I1205 16:29:07.599219 4858 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.6460611859999998 podStartE2EDuration="2.599186138s" podCreationTimestamp="2025-12-05 16:29:05 +0000 UTC" firstStartedPulling="2025-12-05 16:29:06.072227681 +0000 UTC m=+9154.619825820" lastFinishedPulling="2025-12-05 16:29:07.025352633 +0000 UTC m=+9155.572950772" observedRunningTime="2025-12-05 16:29:07.598291283 +0000 UTC m=+9156.145889482" watchObservedRunningTime="2025-12-05 16:29:07.599186138 +0000 UTC m=+9156.146784267"
Dec 05 16:29:10 crc kubenswrapper[4858]: I1205 16:29:10.900436 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501"
Dec 05 16:29:10 crc kubenswrapper[4858]: E1205 16:29:10.901868 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 16:29:23 crc kubenswrapper[4858]: I1205 16:29:23.898990 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501"
Dec 05 16:29:23 crc kubenswrapper[4858]: E1205 16:29:23.899717 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 16:29:38 crc kubenswrapper[4858]: I1205 16:29:38.900335 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501"
Dec 05 16:29:38 crc kubenswrapper[4858]: E1205 16:29:38.901445 4858 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-vtgkn_openshift-machine-config-operator(2ab8742a-625e-4bb8-9329-31f39a34fe48)\"" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48"
Dec 05 16:29:49 crc kubenswrapper[4858]: I1205 16:29:49.899985 4858 scope.go:117] "RemoveContainer" containerID="cf1c4bb9fe9667bc334e9d2345e33462d89cb3f9ccf0105c009c5aba11c1f501"
Dec 05 16:29:50 crc kubenswrapper[4858]: I1205 16:29:50.984887 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" event={"ID":"2ab8742a-625e-4bb8-9329-31f39a34fe48","Type":"ContainerStarted","Data":"2d096e33b20108f11db2cb816b6feae3de909042f5766493ee7b1dfc4edc1324"}
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.206608 4858 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"]
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.208492 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.219733 4858 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.220096 4858 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.237573 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"]
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.381313 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jff5m\" (UniqueName: \"kubernetes.io/projected/f6d2ea36-3229-469b-ac40-23465148b62e-kube-api-access-jff5m\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.381798 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6d2ea36-3229-469b-ac40-23465148b62e-secret-volume\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.381977 4858 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6d2ea36-3229-469b-ac40-23465148b62e-config-volume\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.483605 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6d2ea36-3229-469b-ac40-23465148b62e-config-volume\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.483805 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jff5m\" (UniqueName: \"kubernetes.io/projected/f6d2ea36-3229-469b-ac40-23465148b62e-kube-api-access-jff5m\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.483902 4858 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6d2ea36-3229-469b-ac40-23465148b62e-secret-volume\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.495455 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6d2ea36-3229-469b-ac40-23465148b62e-secret-volume\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.497106 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6d2ea36-3229-469b-ac40-23465148b62e-config-volume\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.501302 4858 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jff5m\" (UniqueName: \"kubernetes.io/projected/f6d2ea36-3229-469b-ac40-23465148b62e-kube-api-access-jff5m\") pod \"collect-profiles-29415870-sqrvg\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:00 crc kubenswrapper[4858]: I1205 16:30:00.534744 4858 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:01 crc kubenswrapper[4858]: I1205 16:30:01.011561 4858 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"]
Dec 05 16:30:01 crc kubenswrapper[4858]: W1205 16:30:01.020465 4858 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6d2ea36_3229_469b_ac40_23465148b62e.slice/crio-bc3fc137f18d130ee4a6b718d886e7c4316e73dfe8745f489060537af1ea794b WatchSource:0}: Error finding container bc3fc137f18d130ee4a6b718d886e7c4316e73dfe8745f489060537af1ea794b: Status 404 returned error can't find the container with id bc3fc137f18d130ee4a6b718d886e7c4316e73dfe8745f489060537af1ea794b
Dec 05 16:30:01 crc kubenswrapper[4858]: I1205 16:30:01.112186 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg" event={"ID":"f6d2ea36-3229-469b-ac40-23465148b62e","Type":"ContainerStarted","Data":"bc3fc137f18d130ee4a6b718d886e7c4316e73dfe8745f489060537af1ea794b"}
Dec 05 16:30:02 crc kubenswrapper[4858]: I1205 16:30:02.123394 4858 generic.go:334] "Generic (PLEG): container finished" podID="f6d2ea36-3229-469b-ac40-23465148b62e" containerID="d6e09240bfb92212abd58447ea0657a4f6ef4d8fcaba817c2e6ca68ca987489e" exitCode=0
Dec 05 16:30:02 crc kubenswrapper[4858]: I1205 16:30:02.123472 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg" event={"ID":"f6d2ea36-3229-469b-ac40-23465148b62e","Type":"ContainerDied","Data":"d6e09240bfb92212abd58447ea0657a4f6ef4d8fcaba817c2e6ca68ca987489e"}
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.528309 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.657752 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6d2ea36-3229-469b-ac40-23465148b62e-secret-volume\") pod \"f6d2ea36-3229-469b-ac40-23465148b62e\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") "
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.657926 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6d2ea36-3229-469b-ac40-23465148b62e-config-volume\") pod \"f6d2ea36-3229-469b-ac40-23465148b62e\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") "
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.658034 4858 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jff5m\" (UniqueName: \"kubernetes.io/projected/f6d2ea36-3229-469b-ac40-23465148b62e-kube-api-access-jff5m\") pod \"f6d2ea36-3229-469b-ac40-23465148b62e\" (UID: \"f6d2ea36-3229-469b-ac40-23465148b62e\") "
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.658646 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6d2ea36-3229-469b-ac40-23465148b62e-config-volume" (OuterVolumeSpecName: "config-volume") pod "f6d2ea36-3229-469b-ac40-23465148b62e" (UID: "f6d2ea36-3229-469b-ac40-23465148b62e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.665111 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6d2ea36-3229-469b-ac40-23465148b62e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f6d2ea36-3229-469b-ac40-23465148b62e" (UID: "f6d2ea36-3229-469b-ac40-23465148b62e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.665519 4858 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d2ea36-3229-469b-ac40-23465148b62e-kube-api-access-jff5m" (OuterVolumeSpecName: "kube-api-access-jff5m") pod "f6d2ea36-3229-469b-ac40-23465148b62e" (UID: "f6d2ea36-3229-469b-ac40-23465148b62e"). InnerVolumeSpecName "kube-api-access-jff5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.760202 4858 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6d2ea36-3229-469b-ac40-23465148b62e-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.760240 4858 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6d2ea36-3229-469b-ac40-23465148b62e-config-volume\") on node \"crc\" DevicePath \"\""
Dec 05 16:30:03 crc kubenswrapper[4858]: I1205 16:30:03.760254 4858 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jff5m\" (UniqueName: \"kubernetes.io/projected/f6d2ea36-3229-469b-ac40-23465148b62e-kube-api-access-jff5m\") on node \"crc\" DevicePath \"\""
Dec 05 16:30:04 crc kubenswrapper[4858]: I1205 16:30:04.140582 4858 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg" event={"ID":"f6d2ea36-3229-469b-ac40-23465148b62e","Type":"ContainerDied","Data":"bc3fc137f18d130ee4a6b718d886e7c4316e73dfe8745f489060537af1ea794b"}
Dec 05 16:30:04 crc kubenswrapper[4858]: I1205 16:30:04.140617 4858 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3fc137f18d130ee4a6b718d886e7c4316e73dfe8745f489060537af1ea794b"
Dec 05 16:30:04 crc kubenswrapper[4858]: I1205 16:30:04.140668 4858 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415870-sqrvg"
Dec 05 16:30:04 crc kubenswrapper[4858]: I1205 16:30:04.599872 4858 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr"]
Dec 05 16:30:04 crc kubenswrapper[4858]: I1205 16:30:04.608887 4858 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415825-2h8xr"]
Dec 05 16:30:05 crc kubenswrapper[4858]: I1205 16:30:05.939056 4858 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7142f7b-2a87-49b5-b3da-a652639e3a83" path="/var/lib/kubelet/pods/f7142f7b-2a87-49b5-b3da-a652639e3a83/volumes"
Dec 05 16:31:04 crc kubenswrapper[4858]: I1205 16:31:04.481182 4858 scope.go:117] "RemoveContainer" containerID="f796f7192608175a37acd673ba0838fa5c3b5e093a168ad0a552cd2a5e3e8492"
Dec 05 16:32:14 crc kubenswrapper[4858]: I1205 16:32:14.760603 4858 patch_prober.go:28] interesting pod/machine-config-daemon-vtgkn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 16:32:14 crc kubenswrapper[4858]: I1205 16:32:14.761535 4858 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vtgkn" podUID="2ab8742a-625e-4bb8-9329-31f39a34fe48" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"